[Ecosystem] Unsloth #55
Description
Contact emails
daniel@unsloth.ai, michael@unsloth.ai
Project summary
Fine-tuning & Reinforcement Learning for LLMs. Train LLMs and VLMs 2x faster with 70% less VRAM.
Project description
- We're one of the most popular fine-tuning, training & reinforcement learning libraries (51K GitHub stars)!
- Unsloth makes training LLMs, VLMs, embedding models, and audio TTS and OCR models 2x faster with 70% less VRAM.
- We also contribute bug fixes to OSS models such as Gemma, Qwen, Mistral, Llama, GPT-OSS, and more.
- We have over 150 million total model downloads via https://huggingface.co/unsloth (4th largest overall) and 30K to 40K pip downloads per day for our package!
- We collaborate directly with TorchAO, OpenEnv, ExecuTorch, and the wider PyTorch team on bringing the latest PyTorch features to everyone!
Are there any other projects in the PyTorch Ecosystem similar to yours? If yes, what are they?
- Transformers / Hugging Face: https://landscape.pytorch.org/?item=modeling--language--transformers
- Lightning AI / LitGPT: 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. https://landscape.pytorch.org/?item=modeling--language--litgpt
Project repo URL
https://github.com/unslothai/unsloth
Additional repos in scope of the application
No response
Project license
Apache-2.0 license
GitHub handles of the project maintainer(s)
danielhanchen, shimmyshimmer, Datta0, mmathew23
Is there a corporate or academic entity backing this project? If so, please provide the name and URL of the entity.
Unsloth AI Inc., https://unsloth.ai/
Website URL
Documentation
- Yes! Docs: https://unsloth.ai/docs
- We also maintain lots of notebooks at https://unsloth.ai/docs/get-started/unsloth-notebooks
- We hosted an RL Mini Summit with OpenEnv and Hugging Face, which is a step-by-step guide to RL: https://www.youtube.com/watch?v=jMSCJZAEYR8
How do you build and test the project today (continuous integration)? Please describe.
We currently test manually: 30 notebooks on GCP every few days.
When new models are released, we have to test them immediately.
We also test once GitHub, Discord, and Reddit issues are resolved.
We're working with AMD and NVIDIA on streamlining CI/CD.
Version of PyTorch
We support PyTorch 2.4.0 through the 2.11 nightlies.
We currently test on 2.9.1 and 2.10.0.
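A minimal sketch (not Unsloth's actual code; the helper names here are hypothetical) of how the stated support window, PyTorch 2.4.0 through the 2.11 nightlies, could be checked from a version string:

```python
# Hypothetical version guard illustrating the support window above.
def parse_version(v: str) -> tuple:
    # "2.10.0+cu121" -> (2, 10, 0); local/nightly suffixes are dropped
    core = v.split("+")[0].split(".dev")[0]
    return tuple(int(p) for p in core.split(".")[:3])

MIN_SUPPORTED = (2, 4, 0)  # oldest supported PyTorch release

def is_supported(torch_version: str) -> bool:
    return parse_version(torch_version) >= MIN_SUPPORTED

print(is_supported("2.9.1"))   # True: inside the support window
print(is_supported("2.3.1"))   # False: older than 2.4.0
```

This avoids importing torch itself, so the check can run before any heavyweight imports.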
Components of PyTorch
Everything, since we train LLMs! We use PyTorch directly:
- torch.nn.*
- torch.nn.functional.*
- General torch functions and nearly all of PyTorch!
For example, https://github.com/search?q=repo%3Aunslothai%2Funsloth%20%22torch%22&type=code shows 90 files using PyTorch; we also have over 1,000 GitHub issues mentioning PyTorch and 130 PRs using it!
How long do you expect to maintain the project?
Forever :) This is our dream project!
Concretely, more than 20 years!
Additional information
We also collaborate with:
- TorchAO https://pytorch.org/blog/torchao-quantized-models-and-quantization-recipes-now-available-on-huggingface-hub/
- ExecuTorch: https://pytorch.org/blog/torchao-quantized-models-and-quantization-recipes-now-available-on-huggingface-hub/
- Part of the PyTorch Conference 2025: https://pytorch.org/blog/torchao-quantized-models-and-quantization-recipes-now-available-on-huggingface-hub/
- PyTorch at Neurips 2025: https://pytorch.org/blog/pytorch-foundation-at-neurips-2025/
- OpenEnv: https://huggingface.co/blog/openenv
- GPU Mode, OpenEnv Mini RL Conference: https://www.youtube.com/watch?v=jMSCJZAEYR8
- PyTorch generally: https://www.youtube.com/watch?v=MQwryfkydc0