Commit 6dfcbad

Merge pull request #280 from stochasticai/glenn/mixtral

docs: update README.md

2 parents a070ec1 + cad82ae
File tree: 1 file changed (+37 -36 lines)

README.md: 37 additions, 36 deletions
@@ -2,7 +2,7 @@
 <img src=".github/stochastic_logo_light.svg#gh-light-mode-only" width="250" alt="Stochastic.ai"/>
 <img src=".github/stochastic_logo_dark.svg#gh-dark-mode-only" width="250" alt="Stochastic.ai"/>
 </p>
-<h3 align="center">Build, customize and control your own personal LLMs</h3>
+<h3 align="center">Build, modify, and control your own personalized LLMs</h3>
 
 <p align="center">
 <a href="https://pypi.org/project/xturing/">
@@ -15,13 +15,14 @@
 <img src="https://img.shields.io/badge/Chat-FFFFFF?logo=discord&style=for-the-badge"/>
 </a>
 </p>
+
 <br>
 
 ___
 
-`xTuring` provides fast, efficient and simple fine-tuning of LLMs, such as LLaMA, GPT-J, Galactica, and more.
+`xTuring` provides fast, efficient and simple fine-tuning of open-source LLMs, such as Mistral, LLaMA, GPT-J, and more.
 By providing an easy-to-use interface for fine-tuning LLMs to your own data and application, xTuring makes it
-simple to build, customize and control LLMs. The entire process can be done inside your computer or in your
+simple to build, modify, and control LLMs. The entire process can be done inside your computer or in your
 private cloud, ensuring data privacy and security.
 
 With `xTuring` you can,
@@ -33,6 +34,38 @@ With `xTuring` you can,
 
 <br>
 
+## ⚙️ Installation
+```bash
+pip install xturing
+```
+
+<br>
+
+## 🚀 Quickstart
+
+```python
+from xturing.datasets import InstructionDataset
+from xturing.models import BaseModel
+
+# Load the dataset
+instruction_dataset = InstructionDataset("./examples/models/llama/alpaca_data")
+
+# Initialize the model
+model = BaseModel.create("llama_lora")
+
+# Finetune the model
+model.finetune(dataset=instruction_dataset)
+
+# Perform inference
+output = model.generate(texts=["Why LLM models are becoming so important?"])
+
+print("Generated output by the model: {}".format(output))
+```
+
+You can find the data folder [here](examples/models/llama/alpaca_data).
+
+<br>
+
 ## 🌟 What's new?
 We are excited to announce the latest enhancements to our `xTuring` library:
 1. __`LLaMA 2` integration__ - You can use and fine-tune the _`LLaMA 2`_ model in different configurations: _off-the-shelf_, _off-the-shelf with INT8 precision_, _LoRA fine-tuning_, _LoRA fine-tuning with INT8 precision_ and _LoRA fine-tuning with INT4 precision_ using the `GenericModel` wrapper and/or you can use the `Llama2` class from `xturing.models` to test and finetune the model.
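The LLaMA 2 item above names five configurations, but the diff only shows the plain `llama2` key. A minimal sketch of how the other configurations are plausibly selected, assuming the key strings below follow the library's model/adapter/precision naming pattern (they are not confirmed by this diff):

```python
from xturing.models import BaseModel

# Key strings below are assumptions inferred from the naming pattern,
# not taken from this diff; check xturing.models for the supported list.
model = BaseModel.create("llama2")            # off-the-shelf
model = BaseModel.create("llama2_int8")       # off-the-shelf, INT8 precision
model = BaseModel.create("llama2_lora")       # LoRA fine-tuning
model = BaseModel.create("llama2_lora_int8")  # LoRA fine-tuning, INT8 precision
```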
@@ -45,7 +78,7 @@ from xturing.models import BaseModel
 model = BaseModel.create('llama2')
 
 ```
-2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The metrics currently supported is [`perplexity`](https://towardsdatascience.com/perplexity-in-language-models-87a196019a94).
+2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The metrics currently supported is [`perplexity`](https://en.wikipedia.org/wiki/Perplexity).
 ```python
 # Make the necessary imports
 from xturing.datasets import InstructionDataset
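The evaluation snippet is truncated by the hunk boundary: only its imports appear as context. A minimal sketch of how it plausibly continues, assuming an `evaluate()` method that returns the perplexity score; the dataset path and `gpt2` key are illustrative, not taken from this diff:

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Illustrative dataset path and model key (assumptions)
dataset = InstructionDataset("./examples/models/llama/alpaca_data")
model = BaseModel.create("gpt2")

# Assumed API: evaluate() scores the model on the dataset (perplexity)
result = model.evaluate(dataset)
print("Perplexity of the evaluation: {}".format(result))
```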
@@ -118,38 +151,6 @@ For an extended insight, consider examining the [GenericModel working example](e
 
 <br>
 
-## ⚙️ Installation
-```bash
-pip install xturing
-```
-
-<br>
-
-## 🚀 Quickstart
-
-```python
-from xturing.datasets import InstructionDataset
-from xturing.models import BaseModel
-
-# Load the dataset
-instruction_dataset = InstructionDataset("./alpaca_data")
-
-# Initialize the model
-model = BaseModel.create("llama_lora")
-
-# Finetune the model
-model.finetune(dataset=instruction_dataset)
-
-# Perform inference
-output = model.generate(texts=["Why LLM models are becoming so important?"])
-
-print("Generated output by the model: {}".format(output))
-```
-
-You can find the data folder [here](examples/models/llama/alpaca_data).
-
-<br>
-
 ## CLI playground
 <img src=".github/cli-playground.gif" width="80%" style="margin: 0 1%;"/>
 