docs/docs/overview/intro.md
12 additions & 12 deletions
@@ -29,7 +29,7 @@ pip install xturing
 
 **Welcome to xTuring: Personalize AI your way**
 
-In the world of AI, personalization is incredibly important for making AI truly powerful. This is where xTuring comes in – it's a special open-source software that helps you make AI models, called Large Language Models (LLMs), work exactly the way you want them to.
+In the world of AI, personalization is incredibly important for making AI truly powerful. This is where xTuring comes in – it's a special open-source software that helps you make AI models, called Large Language Models (LLMs), work exactly the way you want them to.
 
 What's great about xTuring is that it's super easy to use. It has a simple interface that's designed to help you customize LLMs for your specific needs, whether it's for your own data or applications. Basically, xTuring gives you complete control over personalizing AI, making it work just the way you need it to.
 
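The paragraph above describes xTuring's interface only in general terms. As a concrete illustration, here is a minimal sketch of the basic load-and-generate flow, following the `BaseModel.create`/`generate` pattern from the project README; the `"gpt2"` model key and the prompt text are illustrative assumptions, not part of this change:

```python
# Minimal sketch of xTuring's basic flow, following the
# BaseModel.create / generate pattern from the project README.
# The "gpt2" key and the prompt are illustrative assumptions.
from xturing.models import BaseModel

model = BaseModel.create("gpt2")  # load a supported base model by key
outputs = model.generate(texts=["What is fine-tuning?"])
print(outputs)
```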
@@ -53,16 +53,16 @@ To get started with xTuring, check out the [Quickstart](/overview/quickstart) guide
 
 | Model | Examples |
 | --- | --- |
-| Bloom |[Bloom fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/bloom)|
-| Cerebras-GPT |[Cerebras-GPT fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/cerebras)|
-| Falcon |[Falcon 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/falcon)|
-| Galactica |[Galactica fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/galactica)|
-| Generic Wrapper |[Any large language model fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/generic)|
-| GPT-2 |[GPT-2 fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/gpt2)|
-| LLaMA |[LLaMA 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/llama)|
-| LLaMA 2 |[LLaMA 2 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/llama2)|
-| OPT |[OPT fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/opt)|
+| Bloom |[Bloom fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/bloom)|
+| Cerebras-GPT |[Cerebras-GPT fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/cerebras)|
+| Falcon |[Falcon 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/falcon)|
+| Galactica |[Galactica fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/galactica)|
+| Generic Wrapper |[Any large language model fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/generic)|
+| GPT-2 |[GPT-2 fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/gpt2)|
+| LLaMA |[LLaMA 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/llama)|
+| LLaMA 2 |[LLaMA 2 7B fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/llama2)|
+| OPT |[OPT fine-tuning on Alpaca dataset with/without LoRA and with/without INT8](https://github.com/stochasticai/xturing/tree/main/examples/models/opt)|
 
 xTuring is licensed under [Apache 2.0](https://github.com/stochasticai/xturing/blob/main/LICENSE)
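Each example linked in the table follows the same fine-tuning recipe. A hedged sketch of that flow, assuming the `InstructionDataset`/`BaseModel.finetune` API shown in the xTuring README; the `./alpaca_data` path and the `"llama_lora_int8"` model key are illustrative, not confirmed by this change:

```python
# Sketch of the fine-tuning recipe the linked examples share, assuming
# the InstructionDataset / BaseModel.finetune API from the xTuring
# README. The ./alpaca_data path and the "llama_lora_int8" key
# (LoRA + INT8) are illustrative assumptions.
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")  # Alpaca-format instruction data

# The key selects the variant: e.g. "llama" (full fine-tune),
# "llama_lora" (LoRA), "llama_lora_int8" (LoRA + INT8).
model = BaseModel.create("llama_lora_int8")

model.finetune(dataset=dataset)                 # fine-tune on the dataset
print(model.generate(texts=["Explain LoRA briefly."]))
model.save("./llama_finetuned")                 # persist the tuned weights
```

Swapping the model key is how the with/without LoRA and with/without INT8 variants described in the table are selected.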
@@ -85,4 +85,4 @@ The people who created xTuring come from a place called Stochastic, where lots o
 
 **Here to Help You Succeed**: Our job doesn't stop with making xTuring. We're here to help you learn and use AI in the best way possible. We want you to feel confident using our tool in the fast-changing world of AI.
 
-[Come Work with Us](/contributing) and be part of the future of AI with xTuring. We're all about new ideas and making AI better for everyone. We're here to help you every step of the way.
+[Come Work with Us](/contributing) and be part of the future of AI with xTuring. We're all about new ideas and making AI better for everyone. We're here to help you every step of the way.