
Commit 5119c95

Update README.md

1 parent fc317e6 commit 5119c95

File tree

1 file changed: 0 additions, 27 deletions


README.md

Lines changed: 0 additions & 27 deletions
@@ -161,33 +161,6 @@ See the [Configuration Guide](configs/default_config.yaml) for a full list of op
 See the `examples/` directory for complete examples of using OpenEvolve on various problems:

-### 🚀 MLX Fine-tuning Optimization (NEW!)
-
-**OpenEvolve discovered a 17.3x speedup for MLX fine-tuning on Apple Silicon!** This example demonstrates how evolutionary programming can automatically discover performance optimizations that exceed what human engineers typically achieve.
-
-[Explore the MLX Fine-tuning Optimization Example](examples/mlx_finetuning_optimization/)
-
-**Breakthrough Results Achieved:**
-- **17.3x faster training throughput** (120 → 2,207 tokens/sec)
-- **9.4x better memory efficiency** (0.075 → 0.78 tokens/sec/MB)
-- **65% faster training completion** (65.8s → 23.2s)
-- **6.4x more data processed** in the same time
-
-**Key AI-Discovered Optimizations:**
-- Block-diagonal chunked attention (reduces memory complexity)
-- True sequence packing (eliminates padding waste)
-- Aggressive fp16 gradient accumulation (50% memory savings)
-- Coordinated 256-token chunking (Apple Silicon optimized)
-- Ultra-frequent garbage collection (prevents memory pressure)
-
-**Ready-to-Use Integration:**
-```python
-from mlx_optimization_patch import apply_optimizations
-apply_optimizations(your_trainer)  # One line. 17x speedup.
-```
-
-This example parallels AlphaEvolve's Gemini kernel optimization work, where AI discovered a 23% speedup for Google's production training systems. Our MLX optimizations achieve even more dramatic improvements specifically for Apple Silicon fine-tuning.
-
 ### Symbolic Regression

 A comprehensive example demonstrating OpenEvolve's application to symbolic regression tasks using the LLM-SRBench benchmark. This example shows how OpenEvolve can evolve simple mathematical expressions (like linear models) into complex symbolic formulas that accurately fit scientific datasets.
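The "block-diagonal chunked attention" named in the removed section can be sketched roughly as follows. This is an illustrative NumPy reconstruction, not code from the repository: the function names (`chunked_attention`, `softmax`) and the `chunk` parameter are hypothetical, and the real MLX implementation may differ in detail. The idea is that each chunk of queries attends only to keys/values within the same chunk, so peak score-matrix memory drops from O(n²) to O(n·chunk).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_attention(q, k, v, chunk=256):
    """Block-diagonal attention: queries in each chunk attend only to
    keys/values in the same chunk, never across chunk boundaries."""
    n, d = q.shape
    out = np.empty_like(v)
    for start in range(0, n, chunk):
        end = min(start + chunk, n)
        # Score matrix is at most (chunk x chunk) instead of (n x n).
        scores = q[start:end] @ k[start:end].T / np.sqrt(d)
        out[start:end] = softmax(scores) @ v[start:end]
    return out
```

With `chunk` equal to the sequence length this reduces to ordinary full attention; smaller chunks trade cross-chunk interactions for a bounded score-matrix size, which is the memory/throughput trade the removed section alludes to.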
