MUST NOT CHANGE: ❌ Function signatures ❌ Algorithm correctness ❌ External API
ALLOWED: ✅ Internal implementation ✅ Data structures ✅ Performance optimizations
```
</details>
<details>
<summary><b>🔬 Advanced Techniques</b></summary>
**Artifact-Driven Iteration:** Enable artifacts in config → Include common error patterns in system message → Add guidance based on stderr/warning patterns
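The three steps above can be sketched as a single config, assuming the option names shown here (`enable_artifacts` in particular is illustrative and may differ in your OpenEvolve version):

```yaml
# Hypothetical sketch: check your OpenEvolve version for exact key names
evaluator:
  enable_artifacts: true   # capture stderr/warnings from each evaluation run
prompt:
  system_message: |
    Improve the program. Avoid these recurring failure patterns:
    - Off-by-one errors in loop bounds
    - Unhandled empty-input cases
    # Extend this list as new stderr/warning patterns appear in artifacts
```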
**Multi-Phase Evolution:** Start broad ("Explore different algorithmic approaches"), then focus ("Given successful simulated annealing, focus on parameter tuning")
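In practice this means two runs whose configs differ only in the system message, for example:

```yaml
# Phase 1: broad exploration
prompt:
  system_message: "Explore different algorithmic approaches..."

# Phase 2: focused optimization (run after reviewing Phase 1 results)
# prompt:
#   system_message: "Given the successful simulated annealing approach,
#     focus on parameter tuning and cooling schedules..."
```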
**Template Stochasticity:** See the [Configuration section](#-configuration) for complete template variation examples.
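The basic pattern looks like this: declare named variation lists once, then reference them with placeholders in your templates.

```yaml
prompt:
  use_template_stochasticity: true
  template_variations:
    greeting:
      - "Let's optimize this code:"
      - "Time to enhance:"
      - "Improving:"
# Use {greeting} in your templates to get a randomly chosen variation
```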
</details>
### Meta-Evolution: Using OpenEvolve to Optimize Prompts
**You can use OpenEvolve to evolve your system messages themselves!** This powerful technique lets you optimize prompts for better LLM performance automatically.
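For instance, on a question-answering task the evolved system message can grow explicit reasoning instructions:

```yaml
# Example: evolving prompts for the HotpotQA dataset
Initial Prompt: "Answer the question based on the context."

Evolved Prompt: "As an expert analyst, carefully examine the provided context.
  Break down complex multi-hop reasoning into clear steps. Cross-reference
  information from multiple sources to ensure accuracy. Answer: [question]"
```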
See the [LLM Prompt Optimization example](examples/llm_prompt_optimization/) for a complete implementation, including the HotpotQA case study with +23% accuracy improvement.
### Common Pitfalls to Avoid
<details>
<summary><b>💰 How much does it cost to run?</b></summary>
See the [Cost Estimation](#cost-estimation) section in Installation & Setup for detailed pricing information and cost-saving tips.
</details>
**Articles & Blog Posts About OpenEvolve**:
- [Towards Open Evolutionary Agents](https://huggingface.co/blog/driaforall/towards-open-evolutionary-agents) - Evolution of coding agents and the open-source movement
- [OpenEvolve: GPU Kernel Discovery](https://huggingface.co/blog/codelion/openevolve-gpu-kernel-discovery) - Automated discovery of optimized GPU kernels with 2-3x speedups
- [OpenEvolve: Evolutionary Coding with LLMs](https://huggingface.co/blog/codelion/openevolve) - Introduction to evolutionary algorithm discovery using large language models