See the [Configuration Guide](configs/default_config.yaml) for a full list of options.
See the `examples/` directory for complete examples of using OpenEvolve on various problems:
### 🚀 MLX Fine-tuning Optimization (NEW!)
**OpenEvolve discovered a 17.3x speedup for MLX fine-tuning on Apple Silicon!** This example demonstrates how evolutionary programming can automatically discover performance optimizations that exceed what human engineers typically achieve.
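
For context, a discovery run like this is typically launched through OpenEvolve's Python API. The sketch below is a minimal, illustrative setup; the paths and iteration count are placeholders, not the exact settings used for this example:

```python
import asyncio
from openevolve import OpenEvolve

async def main():
    # Hypothetical paths: a baseline MLX training script plus an evaluator
    # that scores each candidate by fine-tuning throughput (tokens/sec).
    evolve = OpenEvolve(
        initial_program_path="examples/mlx_finetuning_optimization/initial_program.py",
        evaluation_file="examples/mlx_finetuning_optimization/evaluator.py",
        config_path="examples/mlx_finetuning_optimization/config.yaml",
    )
    best = await evolve.run(iterations=1000)  # evolve toward faster training steps
    print(best.metrics)

asyncio.run(main())
```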
[Explore the MLX Fine-tuning Optimization Example](examples/mlx_finetuning_optimization/)
**Breakthrough Results Achieved:**
- **17.3x faster training throughput** (120 → 2,207 tokens/sec)
Apply the evolved optimizations to an existing trainer with a single call:

```python
from mlx_optimization_patch import apply_optimizations

apply_optimizations(your_trainer)  # One line. 17x speedup.
```
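
One plausible shape for such a patch, shown purely as an illustrative sketch (the trainer's `step` attribute and the use of `mx.compile` are assumptions, not the actual evolved code):

```python
# mlx_optimization_patch.py -- illustrative sketch only, not the evolved patch
import mlx.core as mx

def apply_optimizations(trainer):
    """Monkey-patch the trainer's step function with a compiled version,
    so repeated steps reuse a cached graph instead of re-tracing."""
    # Assumes the trainer exposes a pure step function of array arguments,
    # which is what mx.compile requires to cache and reuse the graph.
    trainer.step = mx.compile(trainer.step)
    return trainer
```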
This example parallels AlphaEvolve's Gemini kernel optimization work, where AI discovered a 23% speedup for Google's production training systems. Our MLX optimizations achieve even more dramatic improvements specifically for Apple Silicon fine-tuning.
### Symbolic Regression
A comprehensive example demonstrating OpenEvolve's application to symbolic regression tasks using the LLM-SRBench benchmark. This example shows how OpenEvolve can evolve simple mathematical expressions (like linear models) into complex symbolic formulas that accurately fit scientific datasets.
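
As a rough illustration of the starting point for such a run, the initial program usually just wraps a trivial model in OpenEvolve's evolve markers; the function signature below is a hypothetical sketch, not the exact benchmark file:

```python
import numpy as np

# EVOLVE-BLOCK-START
def func(x: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Initial candidate: a plain linear model. Evolution rewrites this
    body into richer symbolic expressions that better fit the dataset."""
    return params[0] * x[:, 0] + params[1]
# EVOLVE-BLOCK-END
```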