- [2023/09] Medusa won the [Chai Prize Grant](https://twitter.com/tianle_cai/status/1703891335147897341)🎉 The prize will be used as a development bounty for those who help us achieve milestones in our [roadmap](https://github.com/FasterDecoding/Medusa/issues/3)!
- [2023/09] Medusa v0.1 is released!
- [2024/1] Medusa technical report is now available on [arXiv](https://arxiv.org/abs/2401.10774). We've added multiple new features, including a Medusa-2 recipe for full-model training and self-distillation for adding Medusa to any fine-tuned LLM. The new results show a 2.2-3.6x speedup over the original model on a range of LLMs.
---
## Introduction
Medusa is a simple framework that democratizes the acceleration techniques for LLM generation with multiple decoding heads.
</picture>
<br>
<div align="center" width="80%">
<em>Medusa-1 on Vicuna-7b.</em>
</div>
<br>
</div>
We aim to solve the challenges associated with speculative decoding by implementing the following ideas:
- Instead of introducing a new model, we train multiple decoding heads on the *same* model.
- The training is parameter-efficient so that even the "GPU-Poor" can do it. And since there is no additional model, there is no need to adjust the distributed computing setup.
- Relaxing the requirement of matching the distribution of the original model makes the non-greedy generation even faster than greedy decoding.
In this initial release, our primary focus is on optimizing Medusa for a batch size of 1, a setting commonly used for local model hosting. In this configuration, Medusa delivers approximately a 2x speedup across a range of Vicuna models. We are actively integrating Medusa into additional inference frameworks, aiming for even greater performance gains and broader settings.
In the updated version, we add support for full-model training, called Medusa-2 (in contrast to Medusa-1, which only trains the new heads). Medusa-2 requires a special training recipe that adds the speculative prediction ability while keeping the original model's performance.
We also add support for self-distillation, which allows us to add Medusa to any fine-tuned LLM without requiring access to the original training data.
## Contents
- [Introduction](#introduction)
- [Contents](#contents)
- [Installation](#installation)
  - [Method 1: With pip](#method-1-with-pip)
  - [Method 2: From source (recommended)](#method-2-from-source)
- [Model Weights](#model-weights)
- [Inference](#inference)
- [Training](#training)
- [Push to Hugging Face Hub](#push-to-hugging-face-hub)
- [Citation](#citation)
- [Codebase Guide](#codebase-guide)
- [Community Adoption](#community-adoption)
- [Contributing](#contributing)
- [Acknowledgements](#acknowledgements)
## Installation
### Method 1: With pip (may not be the latest version)
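A minimal sketch of this route, assuming the package is published on PyPI under the name `medusa-llm` (if the name differs, use Method 2 instead):

```bash
# Assumes the PyPI package name is medusa-llm.
pip install medusa-llm
```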
We currently support single-GPU inference with a batch size of 1, which is the most common setup for local model hosting. We are actively working to extend Medusa's capabilities by integrating it into other inference frameworks; please don't hesitate to reach out if you are interested in contributing to this effort.
You can launch a CLI interface for single-GPU inference with `CUDA_VISIBLE_DEVICES=0 python -m medusa.inference.cli --model [path of medusa model]`.
You can also pass `--load-in-8bit` or `--load-in-4bit` to load the base model in a quantized format. If you download the base model elsewhere, you may override the base model name or path with `--base-model [path of base model]`.
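For example, a quantized single-GPU launch might look like the sketch below; the checkpoint ID is only a placeholder, so substitute the path or Hub ID of the Medusa weights you actually downloaded:

```bash
# Placeholder checkpoint ID; replace with your Medusa model path or Hub ID.
CUDA_VISIBLE_DEVICES=0 python -m medusa.inference.cli \
    --model FasterDecoding/medusa-vicuna-7b-v1.3 \
    --load-in-8bit
```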
### Training
In the updated version, we use the amazing [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) library to manage the training process. Please refer to our [fork](https://github.com/ctlllll/axolotl) for the training code. The major code modifications are in [`src/axolotl/utils/models.py`](https://github.com/ctlllll/axolotl/blob/main/src/axolotl/utils/models.py). The training configs can be found in [`examples/medusa`](https://github.com/ctlllll/axolotl/tree/main/examples/medusa).
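As a rough sketch of what a run looks like (the YAML filename below is illustrative; pick one of the actual configs in `examples/medusa`), training follows the standard axolotl workflow:

```bash
# Run from the root of the axolotl fork; the config name below is illustrative.
pip install -e .
accelerate launch -m axolotl.cli.train examples/medusa/your_medusa_config.yml
```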
The data preparation code for self-distillation can be found in the [`data_generation`](data_generation) folder of this repo.
### Training (legacy)
For training, please install:
```bash
pip install -e ".[train]"
```

## Push to Hugging Face Hub

You can upload the trained model with `python -m medusa.hf_utils --folder [path of the model folder] --repo [name of the repo]`.
## Citation
```bibtex
@misc{medusa,
  author = {Tianle Cai and Yuhong Li and Zhengyang Geng and Hongwu Peng and Tri Dao},
  title = {Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads},
  year = {2023},
  howpublished = {\url{https://github.com/FasterDecoding/Medusa}},
}
```
## Community Adoption

We are grateful to the authors for their contributions to the community and sincerely hope that Medusa can help accelerate the development of LLMs. If you are using Medusa in your project, please let us know, and we will add your project to the list.
## Contributing
We welcome community contributions to Medusa. If you have an idea for how to improve it, please open an issue to discuss it with us. When submitting a pull request, please ensure that your changes are well-tested. Please split each major change into a separate pull request. We also have a [Roadmap](ROADMAP.md) summarizing our future plans for Medusa. Don't hesitate to reach out if you are interested in contributing to any of the items on the roadmap.
## Acknowledgements
This codebase is influenced by remarkable projects from the LLM community, including [FastChat](https://github.com/lm-sys/FastChat), [TinyChat](https://github.com/mit-han-lab/llm-awq/tree/main/), [vllm](https://github.com/vllm-project/vllm), and [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This project is supported by [Together AI](https://together.ai/), [MyShell AI](https://myshell.ai/), and [Chai AI](https://www.chai-research.com/).
---

## Data generation (self-distillation)

We use vLLM to enable batched generation. First, install the dependencies:
```bash
pip install vllm openai
```
## Start server
```bash
python -m vllm.entrypoints.openai.api_server \
    --model YOUR_MODEL_NAME --port 8000
```
You can also start multiple servers with different ports to enable parallel generation. In `generate.py`, we scan the ports from 8000 to 8009 to find available servers. You can modify the code to use other ports.
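For instance, a simple shell loop (a sketch, not part of the repo) can bring up several servers on consecutive ports:

```bash
# Launch one vLLM server per port in the background; generate.py scans
# ports 8000-8009 to find the ones that are up.
for PORT in 8000 8001 8002 8003; do
    python -m vllm.entrypoints.openai.api_server \
        --model YOUR_MODEL_NAME --port "$PORT" &
done
wait  # keep the servers in the foreground until they are stopped
```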
## Generate data
The following command lets the model continue the first prompt from each sample in `DATA_PATH`; this is suitable for models that can play both roles in a conversation (e.g., Zephyr 7B). If you want to use all prompts in each sample to repeatedly talk to the model, use `--chat` instead. `--chat` mode works for more models but may take longer to generate because of repeated computation (contributions of a better implementation are welcome).
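For illustration only, an invocation might look like the sketch below; apart from `--chat`, the flag names are assumptions, so check `generate.py` for the actual arguments:

```bash
# Hypothetical flags (except --chat); see generate.py for the real argument names.
python generate.py --data_path DATA_PATH --output_path OUTPUT_PATH

# Replay every prompt in each sample against the model instead.
python generate.py --data_path DATA_PATH --output_path OUTPUT_PATH --chat
```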
When generating with `--chat`, the output file follows the ShareGPT format ([example](https://github.com/lm-sys/FastChat/blob/main/data/dummy_conversation.json)).
The repo also provides a command to convert text generated without `--chat` into the same format.