# OpenEvolve Examples

This directory contains a collection of examples demonstrating how to use OpenEvolve for various tasks including optimization, algorithm discovery, and code evolution. Each example showcases different aspects of OpenEvolve's capabilities and provides templates for creating your own evolutionary coding projects.

## Quick Start Template

To create your own OpenEvolve example, you need three essential components:

### 1. Initial Program (`initial_program.py`)

Your initial program must contain exactly **one** `EVOLVE-BLOCK`:

```python
# EVOLVE-BLOCK-START
def your_function():
    # Your initial implementation here
    # This is the only section OpenEvolve will modify
    pass
# EVOLVE-BLOCK-END

# Helper functions and other code outside the evolve block
def helper_function():
    # This code won't be modified by OpenEvolve
    pass
```

**Critical Requirements:**
- ✅ **Exactly one EVOLVE-BLOCK** (not multiple blocks)
- ✅ Use `# EVOLVE-BLOCK-START` and `# EVOLVE-BLOCK-END` markers
- ✅ Put only the code you want evolved inside the block
- ✅ Helper functions and imports go outside the block

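A quick way to sanity-check this requirement before launching a run is to count the markers yourself. The sketch below assumes the exact marker strings shown above; `check_evolve_block` is a hypothetical helper, not part of the OpenEvolve API:

```python
def check_evolve_block(source: str) -> bool:
    """Return True if the source contains exactly one EVOLVE-BLOCK pair."""
    starts = source.count("# EVOLVE-BLOCK-START")
    ends = source.count("# EVOLVE-BLOCK-END")
    # Exactly one start and one end, with the start appearing first
    return (
        starts == 1
        and ends == 1
        and source.index("# EVOLVE-BLOCK-START") < source.index("# EVOLVE-BLOCK-END")
    )
```

Running it on your `initial_program.py` source before a long evolution run catches the multiple-blocks mistake early.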
### 2. Evaluator (`evaluator.py`)

Your evaluator must return a **dictionary** with specific metric names:

```python
from typing import Dict


def evaluate(program_path: str) -> Dict:
    """
    Evaluate the program and return metrics as a dictionary.

    CRITICAL: Must return a dictionary, not an EvaluationResult object.
    """
    try:
        # Import and run your program
        # Calculate metrics

        return {
            'combined_score': 0.8,  # PRIMARY METRIC for evolution (required)
            'accuracy': 0.9,        # Your custom metrics
            'speed': 0.7,
            'robustness': 0.6,
            # Add any other metrics you want to track
        }
    except Exception as e:
        return {
            'combined_score': 0.0,  # Always return combined_score, even on error
            'error': str(e)
        }
```

**Critical Requirements:**
- ✅ **Return a dictionary**, not an `EvaluationResult` object
- ✅ **Must include `'combined_score'`** - this is the primary metric OpenEvolve uses
- ✅ Higher `combined_score` values should indicate better programs
- ✅ Handle exceptions and return `combined_score: 0.0` on failure

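Since `combined_score` drives selection, it helps to define it as an explicit weighted aggregate of your sub-metrics so the trade-offs stay visible. This is only an illustrative sketch; the weights and metric names are arbitrary choices, not values prescribed by OpenEvolve:

```python
def combine_metrics(accuracy: float, speed: float, robustness: float) -> dict:
    """Fold sub-metrics into a single combined_score (illustrative weights)."""
    # Hypothetical weighting: quality dominates, efficiency and
    # robustness act as tie-breakers. Tune to your own goals.
    weights = {"accuracy": 0.6, "speed": 0.2, "robustness": 0.2}
    combined = (
        weights["accuracy"] * accuracy
        + weights["speed"] * speed
        + weights["robustness"] * robustness
    )
    return {
        "combined_score": combined,
        "accuracy": accuracy,
        "speed": speed,
        "robustness": robustness,
    }
```

Keeping the raw sub-metrics in the returned dictionary alongside `combined_score` makes later analysis of checkpoints much easier.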
### 3. Configuration (`config.yaml`)

Essential configuration structure:

```yaml
# Evolution settings
max_iterations: 100
checkpoint_interval: 10
parallel_evaluations: 1

# LLM configuration
llm:
  api_base: "https://api.openai.com/v1"  # Or your LLM provider
  models:
    - name: "gpt-4"
      weight: 1.0
  temperature: 0.7
  max_tokens: 4000
  timeout: 120

# Database configuration (MAP-Elites algorithm)
database:
  population_size: 50
  num_islands: 3
  migration_interval: 10
  feature_dimensions:  # MUST be a list, not an integer
    - "score"
    - "complexity"

# Evaluation settings
evaluator:
  timeout: 60
  max_retries: 3

# Prompt configuration
prompt:
  system_message: |
    You are an expert programmer. Your goal is to improve the code
    in the EVOLVE-BLOCK to achieve better performance on the task.

    Focus on algorithmic improvements and code optimization.
  num_top_programs: 3
  num_diverse_programs: 2

# Logging
log_level: "INFO"
```

**Critical Requirements:**
- ✅ **`feature_dimensions` must be a list** (e.g., `["score", "complexity"]`), not an integer
- ✅ Set appropriate timeouts for your use case
- ✅ Configure LLM settings for your provider
- ✅ Use meaningful `system_message` to guide evolution

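After parsing the YAML (e.g. with PyYAML's `safe_load`), a few cheap checks on the resulting dictionary catch common configuration mistakes before a run wastes LLM calls. `validate_config` is a hypothetical helper sketched here, not an OpenEvolve API:

```python
def validate_config(config: dict) -> list:
    """Return a list of human-readable problems found in a parsed config."""
    problems = []
    dims = config.get("database", {}).get("feature_dimensions")
    if not isinstance(dims, list):
        # The single most common mistake: an integer instead of a list
        problems.append("database.feature_dimensions must be a list, "
                        "e.g. ['score', 'complexity']")
    if not config.get("llm", {}).get("models", []):
        problems.append("llm.models must list at least one model")
    if config.get("max_iterations", 0) <= 0:
        problems.append("max_iterations must be a positive integer")
    return problems
```

An empty return value means the checks passed; anything else is worth fixing before launching.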
## Common Configuration Mistakes

❌ **Wrong:** `feature_dimensions: 2`
✅ **Correct:** `feature_dimensions: ["score", "complexity"]`

❌ **Wrong:** Returning `EvaluationResult` object
✅ **Correct:** Returning `{'combined_score': 0.8, ...}` dictionary

❌ **Wrong:** Using `'total_score'` metric name
✅ **Correct:** Using `'combined_score'` metric name

❌ **Wrong:** Multiple EVOLVE-BLOCK sections
✅ **Correct:** Exactly one EVOLVE-BLOCK section

## Running Your Example

```bash
# Basic run
python openevolve-run.py path/to/initial_program.py path/to/evaluator.py --config path/to/config.yaml --iterations 100

# Resume from checkpoint
python openevolve-run.py path/to/initial_program.py path/to/evaluator.py \
  --config path/to/config.yaml \
  --checkpoint path/to/checkpoint_directory \
  --iterations 50

# View results
python scripts/visualizer.py --path path/to/openevolve_output/checkpoints/checkpoint_100/
```

## Advanced Configuration Options

### LLM Ensemble (Multiple Models)
```yaml
llm:
  models:
    - name: "gpt-4"
      weight: 0.7
    - name: "claude-3-sonnet"
      weight: 0.3
```

### Island Evolution (Population Diversity)
```yaml
database:
  num_islands: 5          # More islands = more diversity
  migration_interval: 15  # How often islands exchange programs
  population_size: 100    # Larger population = more exploration
```
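
Conceptually, each island evolves its own sub-population, and every `migration_interval` generations the best programs hop to a neighboring island, reseeding it with fresh material. A toy ring-migration sketch under those assumptions (islands reduced to lists of fitness scores; not OpenEvolve's internal code):

```python
def migrate(islands, k=1):
    """Copy each island's top-k scored items to the next island in a ring."""
    n = len(islands)
    # Select emigrants before mutating anything, so moves are simultaneous
    emigrants = [sorted(island, reverse=True)[:k] for island in islands]
    for i in range(n):
        islands[(i + 1) % n].extend(emigrants[i])
    return islands

# Three islands holding fitness scores; the best of each moves clockwise
islands = [[0.9, 0.2], [0.5, 0.4], [0.7, 0.1]]
migrate(islands)
```

The ring topology means a strong solution still takes several migration rounds to reach every island, which preserves diversity longer than broadcasting it everywhere at once.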

### Cascade Evaluation (Multi-Stage Testing)
```yaml
evaluator:
  cascade_stages:
    - stage1_timeout: 30   # Quick validation
    - stage2_timeout: 120  # Full evaluation
```
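
The idea behind cascading is that a cheap first stage filters out clearly broken programs, so the expensive stage only runs on survivors. A sketch of that control flow, where `quick_check` and `full_benchmark` are hypothetical stand-ins for your own tests:

```python
def cascade_evaluate(program_path, quick_check, full_benchmark,
                     stage1_threshold=0.5):
    """Run a cheap gate first; only promising programs get the full benchmark."""
    stage1 = quick_check(program_path)      # fast stage, e.g. 30s timeout
    if stage1 < stage1_threshold:
        # Fail fast: skip the expensive stage but still report combined_score
        return {"combined_score": stage1 * 0.1, "stage1": stage1}
    stage2 = full_benchmark(program_path)   # slow stage, e.g. 120s timeout
    return {"combined_score": stage2, "stage1": stage1, "stage2": stage2}
```

Scaling down the score of stage-1 failures (rather than zeroing it) keeps a gradient the evolution can still climb.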

## Example Directory

### 🧮 Mathematical Optimization

#### [Function Minimization](function_minimization/)
**Task:** Find the global minimum of a complex non-convex function
**Achievement:** Evolved from random search to sophisticated simulated annealing
**Key Lesson:** Shows automatic discovery of optimization algorithms
```bash
cd examples/function_minimization
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

#### [Circle Packing](circle_packing/)
**Task:** Pack 26 circles in a unit square to maximize the sum of radii
**Achievement:** Matched AlphaEvolve paper results (2.634/2.635)
**Key Lesson:** Demonstrates evolution from geometric heuristics to mathematical optimization
```bash
cd examples/circle_packing
python ../../openevolve-run.py initial_program.py evaluator.py --config config_phase_1.yaml
```

### 🔧 Algorithm Discovery

#### [Signal Processing](signal_processing/)
**Task:** Design digital filters for audio processing
**Achievement:** Discovered novel filter designs with superior characteristics
**Key Lesson:** Shows evolution of domain-specific algorithms
```bash
cd examples/signal_processing
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

#### [Rust Adaptive Sort](rust_adaptive_sort/)
**Task:** Create a sorting algorithm that adapts to data patterns
**Achievement:** Evolved sorting strategies beyond traditional algorithms
**Key Lesson:** Multi-language support (Rust) and algorithm adaptation
```bash
cd examples/rust_adaptive_sort
python ../../openevolve-run.py initial_program.rs evaluator.py --config config.yaml
```

### 🚀 Performance Optimization

#### [MLX Metal Kernel Optimization](mlx_metal_kernel_opt/)
**Task:** Optimize attention mechanisms for Apple Silicon
**Achievement:** 2-3x speedup over baseline implementation
**Key Lesson:** Hardware-specific optimization and performance tuning
```bash
cd examples/mlx_metal_kernel_opt
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

### 🌐 Web and Data Processing

#### [Web Scraper with optillm](web_scraper_optillm/)
**Task:** Extract API documentation from HTML pages
**Achievement:** Demonstrates optillm integration with readurls and MoA
**Key Lesson:** Shows integration with LLM proxy systems and test-time compute
```bash
cd examples/web_scraper_optillm
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

### 💻 Programming Challenges

#### [Online Judge Programming](online_judge_programming/)
**Task:** Solve competitive programming problems
**Achievement:** Automated solution generation and submission
**Key Lesson:** Integration with external evaluation systems
```bash
cd examples/online_judge_programming
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

### 📊 Machine Learning and AI

#### [LLM Prompt Optimization](llm_prompt_optimazation/)
**Task:** Evolve prompts for better LLM performance
**Achievement:** Discovered effective prompt engineering techniques
**Key Lesson:** Self-improving AI systems and prompt evolution
```bash
cd examples/llm_prompt_optimazation
python ../../openevolve-run.py initial_prompt.txt evaluator.py --config config.yaml
```

#### [LM-Eval Integration](lm_eval/)
**Task:** Integrate with the language model evaluation harness
**Achievement:** Automated benchmark improvement
**Key Lesson:** Integration with standard ML evaluation frameworks

#### [Symbolic Regression](symbolic_regression/)
**Task:** Discover mathematical expressions from data
**Achievement:** Automated discovery of scientific equations
**Key Lesson:** Scientific discovery and mathematical modeling

### 🔬 Scientific Computing

#### [R Robust Regression](r_robust_regression/)
**Task:** Develop robust statistical regression methods
**Achievement:** Novel statistical algorithms resistant to outliers
**Key Lesson:** Multi-language support (R) and statistical algorithm evolution
```bash
cd examples/r_robust_regression
python ../../openevolve-run.py initial_program.r evaluator.py --config config.yaml
```

### 🎯 Advanced Features

#### [Circle Packing with Artifacts](circle_packing_with_artifacts/)
**Task:** Circle packing with detailed execution feedback
**Achievement:** Advanced debugging and artifact collection
**Key Lesson:** Using OpenEvolve's artifact system for detailed analysis
```bash
cd examples/circle_packing_with_artifacts
python ../../openevolve-run.py initial_program.py evaluator.py --config config_phase_1.yaml
```

## Best Practices

### 🎯 Design Effective Evaluators
- Use meaningful metrics that reflect your goals
- Include both quality and efficiency measures
- Handle edge cases and errors gracefully
- Provide informative feedback for debugging

### 🔧 Configuration Tuning
- Start with smaller populations and fewer iterations for testing
- Increase `num_islands` for more diverse exploration
- Adjust `temperature` based on how creative you want the LLM to be
- Set appropriate timeouts for your compute environment

### 📈 Evolution Strategy
- Use multiple phases with different configurations
- Begin with exploration, then focus on exploitation
- Consider cascade evaluation for expensive tests
- Monitor progress and adjust configuration as needed

### 🐛 Debugging
- Check logs in `openevolve_output/logs/`
- Examine failed programs in checkpoint directories
- Use artifacts to understand program behavior
- Test your evaluator independently before evolution

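Testing the evaluator independently is the cheapest debugging step: call your `evaluate` function directly on the unmodified initial program and check its return value against the requirements described earlier. `validate_result` below is a hypothetical helper for that check, not an OpenEvolve API:

```python
def validate_result(result) -> list:
    """Check an evaluate() return value against the evaluator requirements."""
    if not isinstance(result, dict):
        # e.g. an EvaluationResult object or None slipped through
        return ["evaluate() must return a dict, not " + type(result).__name__]
    problems = []
    if "combined_score" not in result:
        problems.append("missing 'combined_score' key")
    elif not isinstance(result["combined_score"], (int, float)):
        problems.append("'combined_score' must be numeric")
    return problems
```

A typical use: `validate_result(evaluate("initial_program.py"))` should return an empty list before you start a run.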
## Getting Help

- 📖 See individual example READMEs for detailed walkthroughs
- 🔍 Check the main [OpenEvolve documentation](../README.md)
- 💬 Open issues on the [GitHub repository](https://github.com/codelion/openevolve)

Each example is self-contained and includes all necessary files to get started. Pick an example similar to your use case and adapt it to your specific problem!