output = model.generate(texts=["Why LLM models are becoming so important?"])
print("Generated output by the model: {}".format(output))
```
You can find the data folder [here](examples/models/llama/alpaca_data).
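To try it quickly, the folder can be loaded with the library's dataset class. A minimal sketch, assuming the repository-relative path above:

```python
from xturing.datasets import InstructionDataset

# Load the Alpaca-format instruction data folder referenced above
# (path assumed to be relative to the repository root)
dataset = InstructionDataset('examples/models/llama/alpaca_data')
```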
<br>
## 🌟 What's new?
We are excited to announce the latest enhancements to our `xTuring` library:
1. __`LLaMA 2` integration__ - You can use and fine-tune the _`LLaMA 2`_ model in different configurations: _off-the-shelf_, _off-the-shelf with INT8 precision_, _LoRA fine-tuning_, _LoRA fine-tuning with INT8 precision_ and _LoRA fine-tuning with INT4 precision_ using the `GenericModel` wrapper, or you can use the `Llama2` class from `xturing.models` to test and fine-tune the model, as shown in the snippets below.
```python
from xturing.models import BaseModel

model = BaseModel.create('llama2')
```
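The other configurations are selected by the model key passed to `BaseModel.create`. The sketch below is illustrative only: the key names (`llama2_int8`, `llama2_lora`, `llama2_lora_int8`, `llama2_lora_kbit`) are assumptions based on the library's naming scheme, so check the list of supported models for the exact values.

```python
from xturing.models import BaseModel

# Off-the-shelf with INT8 precision (key name assumed)
model = BaseModel.create('llama2_int8')

# LoRA fine-tuning with INT8 precision (key name assumed)
model = BaseModel.create('llama2_lora_int8')
```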
2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The only metric currently supported is [`perplexity`](https://en.wikipedia.org/wiki/Perplexity), the exponentiated average negative log-likelihood of the evaluation text (lower is better).
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
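from xturing.models import BaseModel

# The rest of this example is a sketch: the dataset path and the 'gpt2'
# model key are assumptions, chosen to match the examples above
dataset = InstructionDataset('examples/models/llama/alpaca_data')
model = BaseModel.create('gpt2')

# Evaluate the model on the dataset; the result is the perplexity score
result = model.evaluate(dataset)
print(f"Perplexity of the evaluation: {result}")
```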
For an extended insight, consider examining the [GenericModel working example](e