
Commit 3b115d2

Update README.md
1 parent 87c449a commit 3b115d2

File tree

1 file changed: +3 −8 lines
  • examples/mlx_finetuning_optimization


examples/mlx_finetuning_optimization/README.md

@@ -134,8 +134,9 @@ python integration_example.py --context # Context manager usage
 ### Custom Models
 The optimizations work with any MLX-compatible model:
 ```python
-trainer = create_optimized_trainer("microsoft/DialoGPT-medium")
-trainer = create_optimized_trainer("mistralai/Mistral-7B-v0.1")
+trainer = create_optimized_trainer("mlx-community/Llama-3.2-1B-Instruct-bf16")
+trainer = create_optimized_trainer("mlx-community/gemma-3-1b-it-bf16")
+trainer = create_optimized_trainer("mlx-community/Qwen3-0.6B-bf16")
 ```
 
 ## ✅ Production Ready
@@ -144,9 +145,3 @@ trainer = create_optimized_trainer("mistralai/Mistral-7B-v0.1")
 - **Training convergence** preserved with identical final loss
 - **Memory safety** ensured with proper error handling
 - **Multiple model sizes** tested and validated
-
-## 🎯 Summary
-
-OpenEvolve demonstrates how AI-driven optimization can discover performance improvements that human engineers might miss. The **17.3x speedup** opens new possibilities for efficient ML training on consumer hardware.
-
-**Get started**: `from mlx_optimization_patch import apply_optimizations; apply_optimizations(trainer)`
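For readers without the package installed, the README's call pattern (create a trainer for an mlx-community model, then patch it with `apply_optimizations`) can be sketched with stand-in definitions. `create_optimized_trainer` and `apply_optimizations` are names taken from the diff above; their real signatures and behavior are assumptions here, and the stub classes exist only to make the sketch runnable:

```python
# Illustrative sketch only: mimics the README's usage pattern.
# The real mlx_optimization_patch API is assumed, not verified.

class StubTrainer:
    """Stand-in for the trainer object the real factory would return."""
    def __init__(self, model_id: str):
        self.model_id = model_id   # Hugging Face-style model identifier
        self.optimized = False     # flipped once optimizations are applied

def create_optimized_trainer(model_id: str) -> StubTrainer:
    # Hypothetical factory; the real one presumably loads an MLX model.
    return StubTrainer(model_id)

def apply_optimizations(trainer: StubTrainer) -> StubTrainer:
    # Hypothetical patch step standing in for mlx_optimization_patch.
    trainer.optimized = True
    return trainer

# The pattern from the README, using one of the models named in the diff:
trainer = create_optimized_trainer("mlx-community/Qwen3-0.6B-bf16")
apply_optimizations(trainer)
```

The two-step shape (construct, then patch in place) matches the "Get started" one-liner the commit removes from the README's summary.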
