**.github/CONTRIBUTING.md** (5 additions, 5 deletions)
````diff
@@ -1,6 +1,6 @@
 ## Contributing to InternLM

-Welcome to the xTuner community! All kinds of contributions are welcomed, including but not limited to
+Welcome to the XTuner community! All kinds of contributions are welcomed, including but not limited to

 **Fix bug**

@@ -27,7 +27,7 @@ If you're not familiar with Pull Request, don't worry! The following guidance wi

 #### 1. Fork and clone

-If you are posting a pull request for the first time, you should fork the xTuner repository by clicking the **Fork** button in the top right corner of the GitHub page, and the forked repository will appear under your GitHub profile.
+If you are posting a pull request for the first time, you should fork the XTuner repository by clicking the **Fork** button in the top right corner of the GitHub page, and the forked repository will appear under your GitHub profile.
````
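For readers following the updated guide, a minimal sketch of the clone step that this hunk introduces. The `<your-username>` placeholder and the `upstream` remote name are illustrative assumptions, not part of the diff (the `git pull upstream master` context in a later hunk suggests an upstream remote is expected):

```shell
# Clone your fork locally (replace <your-username> with your GitHub account)
git clone https://github.com/<your-username>/xtuner.git
cd xtuner

# Track the original repository so you can sync with it later,
# e.g. via `git pull upstream master` as referenced in a later hunk
git remote add upstream https://github.com/InternLM/xtuner.git
```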
````diff
@@ ... @@
-You should configure [pre-commit](https://pre-commit.com/#intro) in the local development environment to make sure the code style matches that of InternLM. **Note**: The following code should be executed under the xTuner directory.
+You should configure [pre-commit](https://pre-commit.com/#intro) in the local development environment to make sure the code style matches that of InternLM. **Note**: The following code should be executed under the XTuner directory.

 ```shell
 pip install -U pre-commit
````
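The hunk above is cut off after `pip install -U pre-commit`. For completeness, a hedged sketch of a typical pre-commit setup run from the repository root; the exact steps in the full CONTRIBUTING.md may differ:

```shell
# Install pre-commit itself
pip install -U pre-commit

# Register the git hook so the style checks run on every commit
pre-commit install

# Optionally, run all hooks once against the entire codebase
pre-commit run --all-files
```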
````diff
@@ -101,7 +101,7 @@ git pull upstream master

 #### 4. Commit the code and pass the unit test

-- xTuner introduces mypy to do static type checking to increase the robustness of the code. Therefore, we need to add Type Hints to our code and pass the mypy check. If you are not familiar with Type Hints, you can refer to [this tutorial](https://docs.python.org/3/library/typing.html).
+- XTuner introduces mypy to do static type checking to increase the robustness of the code. Therefore, we need to add Type Hints to our code and pass the mypy check. If you are not familiar with Type Hints, you can refer to [this tutorial](https://docs.python.org/3/library/typing.html).

 - The committed code should pass through the unit test
````
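Since the changed bullet says the code must pass a mypy check, a hedged example of running that check locally; the diff does not show the project's actual mypy configuration or target path, so both are assumptions here:

```shell
# Install the static type checker
pip install mypy

# Type-check the package source (assuming the code lives in the xtuner/ package directory)
mypy xtuner
```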
````diff
@@ -151,7 +151,7 @@ Find more details about Pull Request description in [pull request guidelines](#p
-xTuner will run unit test for the posted Pull Request on different platforms (Linux, Window, Mac), based on different versions of Python, PyTorch, CUDA to make sure the code is correct. We can see the specific test information by clicking `Details` in the above image so that we can modify the code.
+XTuner will run unit test for the posted Pull Request on different platforms (Linux, Window, Mac), based on different versions of Python, PyTorch, CUDA to make sure the code is correct. We can see the specific test information by clicking `Details` in the above image so that we can modify the code.

 (3) If the Pull Request passes the CI, then you can wait for the review from other developers. You'll modify the code based on the reviewer's comments, and repeat the steps [4](#4-commit-the-code-and-pass-the-unit-test)-[5](#5-push-the-code-to-remote) until all reviewers approve it. Then, we will merge it ASAP.
````
**README.md**

````diff
@@ ... @@
 👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">Twitter</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>

 </div>

-## 📣 News
+## 🎉 News

-- **\[2023.08.xx\]** We release xTuner, with multiple fine-tuned adapters.
+- **\[2023.08.xx\]** XTuner is released, with multiple fine-tuned adapters on [HuggingFace](https://huggingface.co/xtuner).

 ## 📖 Introduction

-xTuner is a toolkit for efficiently fine-tuning LLM, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams.
+XTuner is a toolkit for efficiently fine-tuning LLM, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams.

-- **Efficiency**: Support LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only 15GB, indicating that users can leverage the free resource, *e.g.*, Colab, to fine-tune their custom LLM models.
-- **Versatile**: Support various **LLMs** ([InternLM](https://github.com/InternLM/InternLM), [Llama2](https://github.com/facebookresearch/llama), [Qwen](https://github.com/QwenLM/Qwen-7B), [Baichuan](https://github.com/baichuan-inc)), **datasets** ([MOSS_003_SFT](https://huggingface.co/datasets/fnlp/moss-003-sft-data), [Arxiv GenTitle](https://github.com/WangRongsheng/ChatGenTitle), [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [oasst1](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Chinese Medical Dialogue](https://github.com/Toyhom/Chinese-medical-dialogue-data/)) and **algorithms** ([QLoRA](http://arxiv.org/abs/2305.14314), [LoRA](http://arxiv.org/abs/2106.09685)), allowing users to choose the most suitable solution for their requirements.
-- **Compatibility**: Compatible with [DeepSpeed](https://github.com/microsoft/DeepSpeed) and the [HuggingFace](https://huggingface.co) training pipeline, enabling effortless integration and utilization.
+- **Efficiency**: Support LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only **8GB**, indicating that users can use nearly any GPU (even the free resource, *e.g.*, Colab) to fine-tune custom LLMs.
+- **Versatile**: Support various **LLMs** ([InternLM](https://github.com/InternLM/InternLM), [Llama2](https://github.com/facebookresearch/llama), [Qwen](https://github.com/QwenLM/Qwen-7B), [Baichuan](https://github.com/baichuan-inc), ...), **datasets** ([MOSS_003_SFT](https://huggingface.co/datasets/fnlp/moss-003-sft-data), [Colorist](https://huggingface.co/datasets/burkelibbey/colors), [Code Alpaca](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K), [Arxiv GenTitle](https://github.com/WangRongsheng/ChatGenTitle), [Chinese Law](https://github.com/LiuHC0428/LAW-GPT), [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), ...) and **algorithms** ([QLoRA](http://arxiv.org/abs/2305.14314), [LoRA](http://arxiv.org/abs/2106.09685)), allowing users to choose the most suitable solution for their requirements.
+- **Compatibility**: Compatible with [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀 and [HuggingFace](https://huggingface.co) 🤗 training pipeline, enabling effortless integration and utilization.

 ## 🌟 Demos

-- QLoRA fine-tune for InternLM-7B [](https://colab.research.google.com/drive/1yzGeYXayLomNQjLD4vC6wgUHvei3ezt4?usp=sharing)
-- Chat with Llama2-7B-Plugins [](<>)
-- Integrate xTuner into HuggingFace's pipeline [](https://colab.research.google.com/drive/1eBI9yiOkX-t7P-0-t9vS8y1x5KmWrkoU?usp=sharing)
+- QLoRA Fine-tune [](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)
+- Plugin-based Chat [](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)
+- Ready-to-use models and datasets from XTuner API [](https://colab.research.google.com/drive/1eBI9yiOkX-t7P-0-t9vS8y1x5KmWrkoU?usp=sharing)

 ## 🔥 Supports
````
````diff
@@ -42,7 +42,7 @@ xTuner is a toolkit for efficiently fine-tuning LLM, developed by the [MMRazor](
 <b>SFT Datasets</b>
 </td>
 <td>
-<b>Parallel Strategies</b>
+<b>Data Pipelines</b>
 </td>
 <td>
 <b>Algorithms</b>
@@ -51,42 +51,46 @@ xTuner is a toolkit for efficiently fine-tuning LLM, developed by the [MMRazor](
````
````diff
@@ -97,7 +101,7 @@ xTuner is a toolkit for efficiently fine-tuning LLM, developed by the [MMRazor](

 ### Installation

-Install xTuner with pip
+Install XTuner with pip

 ```shell
 pip install xtuner
````
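A quick, hedged way to sanity-check the install once `pip install xtuner` has finished; `pip show` is standard pip, and `xtuner list-cfg` is the config-listing command that appears later in this diff:

```shell
# Confirm the package is installed and inspect its version
pip show xtuner

# Confirm the console entry point works by listing the bundled configs
xtuner list-cfg
```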
````diff
@@ -111,7 +115,7 @@ cd xtuner
 pip install -e .
 ```

-### Chat [](<>)
+### Chat [](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)

 <table>
 <tr>
@@ -130,7 +134,7 @@ pip install -e .
 </tr>
 </table>

-xTuner provides the tools to chat with pretrained / fine-tuned LLMs.
+XTuner provides tools to chat with pretrained / fine-tuned LLMs.

 - For example, we can start the chat with Llama2-7B-Plugins by
````
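The command that follows "we can start the chat with Llama2-7B-Plugins by" falls outside the captured hunks. A hedged sketch of what such an invocation could look like, assuming an `xtuner chat` entry point; the model path, adapter value, and flag are illustrative placeholders, not taken from this diff:

```shell
# Hypothetical example: chat with a base LLM plus a fine-tuned adapter.
# Both the model path and the adapter value below are placeholders.
xtuner chat meta-llama/Llama-2-7b-hf --adapter ${PATH_TO_HF_ADAPTER}
```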
````diff
@@ -140,17 +144,17 @@ xTuner provides the tools to chat with pretrained / fine-tuned LLMs.

 For more usages, please see [chat.md](./docs/en/chat.md).

-### Fine-tune [](https://colab.research.google.com/drive/1yzGeYXayLomNQjLD4vC6wgUHvei3ezt4?usp=sharing)
+### Fine-tune [](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)

-xTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs.
+XTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs.

-- **Step 0**, prepare the config. xTuner provides many ready-to-use configs and we can view all configs by
+- **Step 0**, prepare the config. XTuner provides many ready-to-use configs and we can view all configs by

 ```shell
 xtuner list-cfg
 ```

-Or, if the provided configs cannot meet the requirements, we can copy the provided config to the specified directory and make modifications by
+Or, if the provided configs cannot meet the requirements, please copy the provided config to the specified directory and make specific modifications by

 ```shell
 xtuner copy-cfg ${CONFIG_NAME} ${SAVE_DIR}
````
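The training command itself is outside the hunks shown above. As a hedged sketch, XTuner's CLI is expected to expose an `xtuner train` entry point that takes a config; the command form and the placeholder below are assumptions rather than quotes from this diff:

```shell
# Launch fine-tuning with a built-in config or one copied via `xtuner copy-cfg`
xtuner train ${CONFIG_NAME_OR_PATH}
```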
````diff
@@ -160,9 +164,9 @@ xTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs.
 For more usages, please see [finetune.md](./docs/en/finetune.md).
@@ -172,13 +176,13 @@ xTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs.
 - **Step 0**, convert the pth adapter to HuggingFace adapter, by

 ```shell
-xtuner convert adapter_pth_2_hf \
+xtuner convert adapter_pth2hf \
 ${CONFIG} \
 ${PATH_TO_PTH_ADAPTER} \
 ${SAVE_PATH_TO_HF_ADAPTER}
 ```

-or, directly merge pth adapter to pretrained LLM, by
+or, directly merge the pth adapter to pretrained LLM, by

 ```shell
 xtuner convert merge_adapter \
````
````diff
@@ -203,13 +207,11 @@ xTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs.

 ### Evaluation

-- We recommend using [OpenCompass](https://github.com/InternLM/opencompass), a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions.
-
-## 🔜 Roadmap
+- We recommend using [OpenCompass](https://github.com/InternLM/opencompass), a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions.


 ## 🤝 Contributing
-We appreciate all contributions to xTuner. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
+We appreciate all contributions to XTuner. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
````