BlossomTuneLLM-MLX: Federated LLM Fine-Tuning with Flower, natively on Apple Silicon #404
mrs83 started this conversation in Show and tell
Hello mlx-lm community, I'm happy to share a project I've been working on that combines the power of mlx-lm with the world of federated learning:
GitHub Repo: BlossomTuneLLM-MLX
The core idea was to answer a simple question: how can we enable multiple Mac users to collaboratively fine-tune a language model without ever sharing their private data?
This is particularly relevant for creating specialized models in privacy-sensitive domains (like healthcare or legal tech) or for research that requires diverse, real-world data.
By combining mlx-lm with a federated learning framework, we can leverage the hardware people already own, reducing the reliance on expensive GPUs and promoting a more sustainable and accessible approach to language models.
This project is the MLX-native evolution of BlossomTuneLLM, an earlier codebase built for FlowerTune LLM.
How it Works: mlx-lm + Flower
Each client fine-tunes the model locally with mlx-lm, while Flower handles orchestration and aggregation: the server only ever sees the aggregated model updates, and private data never leaves the device (the supernode).
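To make that concrete, here is a minimal sketch (not the repository's actual code) of how a Flower `NumPyClient` can wrap on-device training so that only weights, never raw examples, travel to the server. The `local_finetune` helper is a hypothetical placeholder for the mlx-lm training loop, and the weight import/export helpers assume a standard `mlx.nn.Module`.

```python
"""Sketch: a Flower client that trains an MLX model locally and only shares weights."""

import numpy as np
import mlx.core as mx
from mlx.utils import tree_flatten, tree_unflatten
import flwr as fl


def get_weights(model):
    # Export the model's trainable parameters as NumPy arrays for Flower.
    return [np.array(v) for _, v in tree_flatten(model.trainable_parameters())]


def set_weights(model, parameters):
    # Load NumPy arrays received from the server back into the MLX model.
    names = [n for n, _ in tree_flatten(model.trainable_parameters())]
    model.update(tree_unflatten([(n, mx.array(p)) for n, p in zip(names, parameters)]))


def local_finetune(model, train_set):
    # Hypothetical placeholder: in the real project this would run an
    # mlx-lm (LoRA) training loop over this client's private partition.
    pass


class FederatedLLMClient(fl.client.NumPyClient):
    def __init__(self, model, train_set):
        self.model = model          # mlx-lm model, held entirely on-device
        self.train_set = train_set  # this client's private data partition

    def get_parameters(self, config):
        return get_weights(self.model)

    def fit(self, parameters, config):
        set_weights(self.model, parameters)          # load the global weights
        local_finetune(self.model, self.train_set)   # train on private data
        # Return updated weights plus the local example count
        # (used by FedAvg for weighted aggregation). No data leaves the device.
        return get_weights(self.model), len(self.train_set), {}

    def evaluate(self, parameters, config):
        set_weights(self.model, parameters)
        # Placeholder loss; a real client would evaluate on a held-out split.
        return 0.0, len(self.train_set), {}
```

On the server side, a standard strategy such as Flower's `FedAvg` averages the returned weight lists, so the coordinator never needs access to any client's dataset.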
Flower makes it easy to run a full simulation on a single machine, using a centralized Hugging Face dataset partitioned with flower-datasets, to test the whole pipeline end to end while fine-tuning SmolLM2-Instruct (135M); see the sketch below.
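As a rough illustration of that setup, the snippet below shows how flower-datasets can split a centralized Hugging Face dataset into per-client partitions for a local simulation. The dataset name and partition count here are illustrative assumptions, not the project's actual configuration.

```python
"""Sketch: partition a centralized HF dataset into per-client shards with flower-datasets."""

from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import IidPartitioner

NUM_CLIENTS = 10  # assumption: number of simulated supernodes

fds = FederatedDataset(
    dataset="databricks/databricks-dolly-15k",  # illustrative instruction dataset
    partitioners={"train": IidPartitioner(num_partitions=NUM_CLIENTS)},
)

# Each simulated client only ever touches its own partition.
partition = fds.load_partition(partition_id=0, split="train")
print(partition)
```

With the partitions in place, the whole round-based process can be exercised on one machine using Flower's simulation tooling (for example the `flwr run` CLI or `flwr.simulation.run_simulation`), before moving to real devices.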
All you need is a Mac with Apple Silicon (or several :)). We're just getting started and would love feedback, suggestions, and contributions from the mlx-lm community.
Thanks for reading!