An open-source evolutionary coding agent that began as a faithful implementation of the AlphaEvolve system described in the Google DeepMind paper "AlphaEvolve: A coding agent for scientific and algorithmic discovery" (2025) and has since evolved far beyond it, enabling automated scientific and algorithmic discovery.

## Overview
OpenEvolve is an evolutionary coding agent that uses Large Language Models to automatically optimize and discover algorithms through iterative improvement. Starting from the AlphaEvolve research, it incorporates advanced features for reproducibility, multi-language support, sophisticated evaluation pipelines, and integration with cutting-edge LLM optimization techniques. It serves as both a research platform for evolutionary AI and a practical tool for automated code optimization.
### Key Features
OpenEvolve implements a comprehensive evolutionary coding system with:
- **Evolutionary Coding Agent**: LLM-guided evolution of entire code files (not just functions)
- **Distributed Controller Loop**: Asynchronous pipeline coordinating LLMs, evaluators, and databases
- **Program Database**: Storage and sampling of evolved programs with evaluation metrics
- **Prompt Sampling**: Context-rich prompts with past programs, scores, and problem descriptions
- **LLM Ensemble**: Multiple language models working together for code generation
- **Multi-objective Optimization**: Simultaneous optimization of multiple evaluation metrics
- **Checkpoint System**: Automatic saving and resuming of evolution state

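Conceptually, these pieces form a generate–evaluate–store loop. The sketch below illustrates that loop with toy stand-ins; `Database`, `evolve`, and the `mutate`/`evaluate` callables are illustrative names, not OpenEvolve's actual API:

```python
import random

class Database:
    """Toy program store: keeps (program, score) pairs and samples parents."""
    def __init__(self):
        self.programs = []

    def add(self, program, score):
        self.programs.append((program, score))

    def sample_parent(self):
        # Small tournament: favor higher-scoring programs as parents.
        pool = random.sample(self.programs, min(3, len(self.programs)))
        return max(pool, key=lambda p: p[1])

def evolve(initial_program, mutate, evaluate, iterations=100):
    db = Database()
    db.add(initial_program, evaluate(initial_program))
    for _ in range(iterations):
        parent, _ = db.sample_parent()
        child = mutate(parent)           # in OpenEvolve, an LLM rewrite of the code
        db.add(child, evaluate(child))   # multi-objective in the real system
    return max(db.programs, key=lambda p: p[1])
```

In the real system the mutate step is an LLM call, evaluation runs in a distributed pool, and the database tracks multiple metrics per program rather than a single score.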
#### 🔬 **Scientific Reproducibility**
- **Comprehensive Seeding**: Full deterministic reproduction with hash-based component isolation
- **Default Reproducibility**: Seed=42 by default for immediate reproducible results
- **Granular Control**: Per-component seeding for LLMs, database, and evaluation pipeline

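One way hash-based component isolation can work: derive an independent, stable seed for each component from the master seed, so one component's random draws never perturb another's stream. This is a sketch of the idea, not OpenEvolve's exact scheme:

```python
import hashlib
import random

def component_seed(master_seed: int, component: str) -> int:
    """Derive a stable per-component seed by hashing the master seed
    together with the component's name."""
    digest = hashlib.sha256(f"{master_seed}:{component}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

# Each subsystem gets its own isolated RNG (seed=42 is the documented default).
llm_rng = random.Random(component_seed(42, "llm"))
database_rng = random.Random(component_seed(42, "database"))
```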
#### 🤖 **Advanced LLM Integration**
- **Ensemble Sophistication**: Weighted model combinations with intelligent fallback strategies
- **Test-Time Compute**: Integration with [optillm](https://github.com/codelion/optillm) for Mixture of Agents (MoA) and enhanced reasoning
- **Universal API Support**: Works with any OpenAI-compatible endpoint (Anthropic, Google, local models)
- **Plugin Ecosystem**: Support for optillm plugins (readurls, executecode, z3_solver, etc.)

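At its core, a weighted ensemble amounts to sampling one model per request according to configured weights; the request then goes to whatever OpenAI-compatible base URL that model lives behind. A minimal sketch (the model names and weights here are made up):

```python
import random

def pick_model(models, rng=random):
    """models: list of (name, weight) pairs,
    e.g. [("gpt-4o", 0.8), ("llama-local", 0.2)].
    Returns one model name, sampled proportionally to its weight."""
    names, weights = zip(*models)
    return rng.choices(names, weights=weights, k=1)[0]
```

A fallback strategy then retries the same request against the next model when the sampled one errors out or times out.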
#### 🧬 **Evolution Algorithm Innovations**
- **MAP-Elites Implementation**: Quality-diversity algorithm for balanced exploration/exploitation
- **Island-Based Evolution**: Multiple populations with periodic migration for diversity maintenance
- **Inspiration vs Performance**: Sophisticated prompt engineering separating top performers from diverse inspirations
- **Multi-Strategy Selection**: Elite, diverse, and exploratory program sampling strategies

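The heart of MAP-Elites is an archive keyed by feature descriptors (for code, things like a program-length bucket or runtime bucket) where each cell keeps only its best program, so diversity is enforced by the grid itself rather than by the selection pressure. A minimal sketch (the function and feature names are illustrative):

```python
def map_elites_insert(archive, program, score, features):
    """archive: dict mapping a feature-cell tuple to its best (program, score).
    A candidate only competes with the incumbent of its own cell, so a
    low-scoring but behaviorally distinct program survives in its own niche."""
    cell = tuple(features)  # e.g. (length_bucket, runtime_bucket)
    if cell not in archive or score > archive[cell][1]:
        archive[cell] = (program, score)
    return archive
```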
#### 📊 **Evaluation & Feedback Systems**
- **Artifacts Side-Channel**: Capture build errors, profiling data, and execution feedback for LLM improvement
- **Cascade Evaluation**: Multi-stage testing with progressive complexity for efficient resource usage
- **LLM-Based Feedback**: Automated code quality assessment and reasoning capture
- **Comprehensive Error Handling**: Graceful recovery from evaluation failures with detailed diagnostics

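Cascade evaluation orders stages from cheap to expensive and stops at the first failure, so most bad candidates never reach the costly tests, while the metrics collected up to the failure can still flow back to the LLM as feedback. A sketch of the control flow (the stage functions are illustrative):

```python
def cascade_evaluate(program, stages):
    """stages: cheap-to-expensive list of callables, each returning
    (passed, metrics). Returns (overall_passed, accumulated_metrics)."""
    metrics = {}
    for stage in stages:
        passed, stage_metrics = stage(program)
        metrics.update(stage_metrics)   # keep partial feedback even on failure
        if not passed:
            return False, metrics       # fail fast: skip the expensive stages
    return True, metrics
```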
#### 🌐 **Multi-Language & Platform Support**
- **Language Agnostic**: Python, Rust, R, Metal shaders, and more
- **Platform Optimization**: Apple Silicon GPU kernels, CUDA optimization, CPU-specific tuning
   - Deterministic selection with comprehensive seeding

3. **Advanced Evaluator Pool**:
   - Multi-stage cascade evaluation
   - Artifact collection for detailed feedback
   - LLM-based code quality assessment
   - Parallel execution with resource limits

4. **Sophisticated Program Database**:
   - MAP-Elites algorithm for quality-diversity balance
A comprehensive example demonstrating evolution from random search to sophisticated simulated annealing.
#### [Circle Packing](examples/circle_packing/)
Our implementation of the circle packing problem from the AlphaEvolve paper. For the n=26 case, where 26 circles must be packed in a unit square, we achieve state-of-the-art results matching published benchmarks.
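For n=26 the objective is typically to maximize the sum of radii of 26 non-overlapping circles inside the unit square, so candidate packings can be checked with a few lines of geometry. A sketch of such a validity check (not the repo's actual evaluator):

```python
import math

def packing_is_valid(circles, eps=1e-9):
    """circles: list of (x, y, r) tuples. True when every circle fits
    inside the unit square and no two circles overlap."""
    for x, y, r in circles:
        # Containment: the circle must stay within [0, 1] x [0, 1].
        if r < 0 or x - r < -eps or x + r > 1 + eps or y - r < -eps or y + r > 1 + eps:
            return False
    for i, (x1, y1, r1) in enumerate(circles):
        for x2, y2, r2 in circles[i + 1:]:
            # Overlap: center distance must be at least the sum of radii.
            if math.hypot(x1 - x2, y1 - y2) + eps < r1 + r2:
                return False
    return True
```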
Below is the optimal packing found by OpenEvolve after 800 iterations:
#### [Web Scraper with optillm](examples/web_scraper_optillm/)
Demonstrates integration with [optillm](https://github.com/codelion/optillm) for test-time compute optimization, including: