README.md (+15/-2 lines changed)
@@ -9,6 +9,7 @@ An open-source implementation of the AlphaEvolve system described in the Google
 OpenEvolve is an evolutionary coding agent that uses Large Language Models to optimize code through an iterative process. It orchestrates a pipeline of LLM-based code generation, evaluation, and selection to continuously improve programs for a variety of tasks.
 
 Key features:
+
 - Evolution of entire code files, not just single functions
 - Support for multiple programming languages
 - Supports OpenAI-compatible APIs for any LLM
@@ -34,6 +35,7 @@ The controller orchestrates interactions between these components in an asynchro
 - see the branching of your program evolution in a network visualization, with node radius chosen by the program fitness (= the currently selected metric),
 - see the parent-child relationship of nodes and click through them in the sidebar (use the yellow locator icon in the sidebar to center the node in the graph),
 - select the metric of interest (with the available metric choices depending on your data set),
@@ -157,6 +161,7 @@ In the visualization UI, you can
 Sample configuration files are available in the `configs/` directory:
+
 - `default_config.yaml`: Comprehensive configuration with all available options
 
 See the [Configuration Guide](configs/default_config.yaml) for a full list of options.
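As a rough illustration, a minimal config might look like the sketch below; apart from the `prompt` artifact keys that also appear later in this README, the key and value names here are assumptions rather than the actual schema of `default_config.yaml`:

```yaml
# Hypothetical minimal config sketch; the `llm` keys are assumptions.
llm:
  api_base: "https://api.openai.com/v1"  # any OpenAI-compatible endpoint
  model: "gpt-4o-mini"                   # assumed model name
prompt:
  include_artifacts: true
  max_artifact_bytes: 4096               # 4KB limit in prompts
  artifact_security_filter: true
```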
@@ -205,18 +211,23 @@ return EvaluationResult(
 ```
 
 The next generation prompt will include:
+
 ```markdown
 ## Last Execution Output
+
 ### Stderr
+
 SyntaxError: invalid syntax (line 15)
 
 ### Traceback
+
 ...
 ```
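The artifact flow above can be sketched in Python; `EvaluationResult` here is a stand-in dataclass and `render_artifacts` a hypothetical helper, not OpenEvolve's actual API. The markdown layout simply mirrors the block shown above.

```python
# Sketch only: EvaluationResult and render_artifacts are stand-ins,
# not OpenEvolve's actual API.
from dataclasses import dataclass, field


@dataclass
class EvaluationResult:
    metrics: dict                                  # float scores used for selection
    artifacts: dict = field(default_factory=dict)  # free-form execution feedback


def render_artifacts(result: EvaluationResult) -> str:
    """Render artifacts as a markdown section for the next generation prompt."""
    lines = ["## Last Execution Output"]
    for name, text in result.artifacts.items():
        lines.append(f"\n### {name.capitalize()}\n")
        lines.append(text)
    return "\n".join(lines)


result = EvaluationResult(
    metrics={"score": 0.0},
    artifacts={"stderr": "SyntaxError: invalid syntax (line 15)", "traceback": "..."},
)
print(render_artifacts(result))
```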
 
 ## Example: LLM Feedback
 
 An example of an LLM artifact side channel is part of the default evaluation template, which ends with
+
 ```markdown
 Return your evaluation as a JSON object with the following format:
 {{
@@ -226,6 +237,7 @@ Return your evaluation as a JSON object with the following format:
   "reasoning": "[brief explanation of scores]"
 }}
 ```
+
 Non-float values, in this case the "reasoning" key of the JSON response that the evaluator LLM generates, will be available within the next generation prompt.
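A sketch of how such a response might be split into float metrics and non-float feedback; `split_feedback` is a hypothetical helper, not OpenEvolve's parser, and the doubled braces in the template above are assumed to be prompt-template escaping.

```python
# Sketch only: split_feedback is a hypothetical helper, not OpenEvolve's parser.
import json


def split_feedback(response_text: str):
    """Separate float scores (used for selection) from non-float values
    (carried into the next generation prompt)."""
    data = json.loads(response_text)
    metrics = {
        k: v for k, v in data.items()
        if isinstance(v, (int, float)) and not isinstance(v, bool)
    }
    artifacts = {k: v for k, v in data.items() if k not in metrics}
    return metrics, artifacts


metrics, artifacts = split_feedback(
    '{"readability": 0.9, "correctness": 0.8, '
    '"reasoning": "clear structure, minor edge cases"}'
)
print(metrics)    # {'readability': 0.9, 'correctness': 0.8}
print(artifacts)  # {'reasoning': 'clear structure, minor edge cases'}
```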
 
 ### Configuration
@@ -239,7 +251,7 @@ evaluator:
 
 prompt:
   include_artifacts: true
-max_artifact_bytes: 4096  # 4KB limit in prompts
+  max_artifact_bytes: 4096  # 4KB limit in prompts
   artifact_security_filter: true
 ```
 
@@ -266,6 +278,7 @@ A comprehensive example demonstrating OpenEvolve's application to symbolic regre
 [Explore the Symbolic Regression Example](examples/symbolic_regression/)
 
 Key features:
+
 - Automatic generation of initial programs from benchmark tasks
 - Evolution from simple linear models to complex mathematical expressions
 - Evaluation on physics, chemistry, biology, and material science datasets