README.md (6 additions, 17 deletions)
See the [Configuration Guide](configs/default_config.yaml) for a full list of options.

## Artifacts Channel

OpenEvolve includes an **artifacts side-channel** that allows evaluators to capture build errors, profiling results, and other diagnostic output to provide better feedback to the LLM in subsequent generations. This feature enhances the evolution process by giving the LLM context about what went wrong and how to fix it.

The artifacts channel operates alongside the traditional fitness metrics.
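For illustration, here is a minimal evaluator sketch that returns an `EvaluationResult` carrying both numeric metrics and artifacts. The import path, field names, and run logic are assumptions for illustration, not confirmed API:

```python
# Illustrative sketch only: the import path and EvaluationResult fields are
# assumptions based on the `return EvaluationResult(` call used in this section.
import subprocess

from openevolve.evaluation_result import EvaluationResult  # assumed module path


def evaluate(program_path: str) -> EvaluationResult:
    """Run a candidate program and report fitness metrics plus debugging artifacts."""
    proc = subprocess.run(
        ["python", program_path],
        capture_output=True,
        text=True,
        timeout=60,
    )

    # Numeric metrics drive selection; artifacts carry raw diagnostics
    # (stderr, tracebacks, build logs) back into the next generation's prompt.
    return EvaluationResult(
        metrics={"runs_successfully": 1.0 if proc.returncode == 0 else 0.0},
        artifacts={"stderr": proc.stderr},
    )
```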
The next generation prompt will include:

```markdown
## Last Execution Output

### Stderr
SyntaxError: invalid syntax (line 15)

### Traceback
...
```

## Example: LLM Feedback

An example of an LLM artifact side-channel is part of the default evaluation prompt template, which ends with:

```markdown
Return your evaluation as a JSON object with the following format:
{{
    "readability": [score],
    "maintainability": [score],
    "efficiency": [score],
    "reasoning": "[brief explanation of scores]"
}}
```

The non-float values, in this case the "reasoning" key of the JSON response that the evaluator LLM generates, will be available within the next generation prompt.
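In other words, float-valued keys act as scores while non-float values ride along as feedback. A small illustrative sketch of that split; the variable names and splitting logic are assumptions about the behaviour described above, not OpenEvolve code:

```python
# Illustrative sketch: separate an evaluator LLM's JSON reply into numeric
# scores and non-float values, mirroring the behaviour described above.
import json

llm_reply = json.loads("""
{
    "readability": 0.8,
    "maintainability": 0.7,
    "efficiency": 0.6,
    "reasoning": "Clear structure, but the inner loop is quadratic."
}
""")

scores = {k: v for k, v in llm_reply.items() if isinstance(v, (int, float))}
feedback = {k: v for k, v in llm_reply.items() if not isinstance(v, (int, float))}

print(scores)    # {'readability': 0.8, 'maintainability': 0.7, 'efficiency': 0.6}
print(feedback)  # {'reasoning': 'Clear structure, but the inner loop is quadratic.'}
```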
### Configuration
The channel can be disabled via an environment variable:

```bash
export ENABLE_ARTIFACTS=false
```

### Benefits

- **Faster convergence** - LLMs can see what went wrong and fix it directly
- **Better error handling** - Compilation and runtime failures become learning opportunities
- **Rich debugging context** - Full stack traces and error messages guide improvements
- **Zero overhead** - When disabled, no performance impact on evaluation
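As a usage sketch, disabling the channel before launching a run could look like the following; only the `ENABLE_ARTIFACTS` variable comes from this section, and the run command is a placeholder for however you normally start OpenEvolve:

```python
# Illustrative sketch: turn the artifacts channel off before launching a run.
# The run script name and flags below are placeholders, not confirmed CLI.
import os
import subprocess

os.environ["ENABLE_ARTIFACTS"] = "false"  # "zero overhead": no artifacts collected

subprocess.run(
    ["python", "openevolve-run.py", "--config", "configs/default_config.yaml"],
    check=True,
)
```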