|
7 | 7 | ------------------------------------------------------------------------------------------ |
8 | 8 |
|
9 | 9 | <p align="center"> |
10 | | - <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a> |
| 10 | + <a href="https://paddlenlp.readthedocs.io/en/latest/?badge=latest"><img src="https://readthedocs.org/projects/paddlenlp/badge/?version=latest"></a> |
11 | 11 | <a href="https://github.com/PaddlePaddle/PaddleNLP/releases"><img src="https://img.shields.io/github/v/release/PaddlePaddle/PaddleNLP?color=ffa"></a> |
12 | 12 | <a href=""><img src="https://img.shields.io/badge/python-3.7+-aff.svg"></a> |
13 | 13 | <a href=""><img src="https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-pink.svg"></a> |
|
16 | 16 | <a href="https://pypi.org/project/paddlenlp/"><img src="https://img.shields.io/pypi/dm/paddlenlp?color=9cf"></a> |
17 | 17 | <a href="https://github.com/PaddlePaddle/PaddleNLP/issues"><img src="https://img.shields.io/github/issues/PaddlePaddle/PaddleNLP?color=9cc"></a> |
18 | 18 | <a href="https://github.com/PaddlePaddle/PaddleNLP/stargazers"><img src="https://img.shields.io/github/stars/PaddlePaddle/PaddleNLP?color=ccf"></a> |
| 19 | + <a href="./LICENSE"><img src="https://img.shields.io/badge/license-Apache%202-dfd.svg"></a> |
19 | 20 | </p> |
20 | 21 |
|
21 | 22 | <h4 align="center"> |
|
69 | 70 |
|
70 | 71 | The large model suite's high-performance inference module has built-in dynamic insertion and end-to-end operator fusion strategies, which greatly speed up parallel inference. Low-level implementation details are fully encapsulated, providing out-of-the-box high-performance parallel inference. |
71 | 72 |
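A minimal launch sketch for this path, assuming the `predictor.py` entry point under `llm/predict` and the flags described in the [inference docs](./llm/docs/predict/inference.md); the model name and flags below are illustrative assumptions rather than fixed values:

```shell
# Sketch only: launch the fused high-performance generation path from the llm/ directory.
# Model name and flags are assumptions; see llm/docs/predict/inference.md for authoritative usage.
cd llm
python ./predict/predictor.py \
    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
    --inference_model \
    --dtype float16
```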
|
| 73 | +## Documentation |
| 74 | +For more detailed documentation, please visit the [PaddleNLP Documentation](https://paddlenlp.readthedocs.io/). |
| 75 | + |
72 | 76 | ------------------------------------------------------------------------------------------ |
73 | 77 |
|
74 | 78 | ## Model Support |
|
90 | 94 | | [ChatGLM2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/chatglm2) | THUDM/chatglm2-6b | |
91 | 95 | | [ChatGLM3](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/chatglm2) | THUDM/chatglm3-6b | |
92 | 96 | | [DeepSeekV2](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2) | deepseek-ai/DeepSeek-V2, deepseek-ai/DeepSeek-V2-Chat, deepseek-ai/DeepSeek-V2-Lite, deepseek-ai/DeepSeek-V2-Lite-Chat, deepseek-ai/DeepSeek-Coder-V2-Base, deepseek-ai/DeepSeek-Coder-V2-Instruct, deepseek-ai/DeepSeek-Coder-V2-Lite-Base, deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | |
| 97 | +| [DeepSeekV3](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2) | deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-V3-Base | |
| 98 | +| [DeepSeek-R1](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2) | deepseek-ai/DeepSeek-R1, deepseek-ai/DeepSeek-R1-Zero, deepseek-ai/DeepSeek-R1-Distill-Llama-70B, deepseek-ai/DeepSeek-R1-Distill-Llama-8B, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | |
93 | 99 | | [Gemma](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/gemma) | google/gemma-7b, google/gemma-7b-it, google/gemma-2b, google/gemma-2b-it | |
94 | 100 | | [Mistral](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/mistral) | mistralai/Mistral-7B-Instruct-v0.3, mistralai/Mistral-7B-v0.1 | |
95 | 101 | | [Mixtral](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/mixtral) | mistralai/Mixtral-8x7B-Instruct-v0.1 | |
|
130 | 136 |
|
131 | 137 |
|
132 | 138 | | Model | Pretrain | SFT | LoRA | FlashMask | Prefix Tuning | DPO/SimPO/ORPO/KTO | RLHF | Mergekit | Quantization | |
133 | | -|--------------------------------------------|:--------:|:---:|:----:|:---------:|:-------------:|:--------------:|:----:|:-----:|:------------:| |
134 | | -| [Llama](./llm/config/llama) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
135 | | -| [Qwen](./llm/config/qwen) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | |
136 | | -| [Mixtral](./llm/config/mixtral) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
137 | | -| [Mistral](./llm/config/mistral) | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | 🚧 | ✅ | 🚧 | |
138 | | -| [Baichuan/Baichuan2](./llm/config/llama) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | |
139 | | -| [ChatGLM-6B](./llm/config/chatglm) | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | 🚧 | ✅ | ✅ | |
140 | | -| [ChatGLM2/ChatGLM3](./llm/config/chatglm2) | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | 🚧 | ✅ | ✅ | |
141 | | -| [Bloom](./llm/config/bloom) | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | 🚧 | ✅ | ✅ | |
142 | | -| [GPT-3](./llm/config/gpt-3) | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | ✅ | 🚧 | |
143 | | -| [OPT](./llm/config/opt) | ✅ | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | ✅ | 🚧 | |
144 | | -| [Gemma](./llm/config/gemma) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
145 | | -| [Yuan](./llm/config/yuan) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
| 139 | +|--------------------------------------------|:--------:|:---:|:----:|:---------:|:-------------:|:------------------:|:----:|:--------:|:------------:| |
| 140 | +| [Llama](./llm/config/llama) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
| 141 | +| [Qwen](./llm/config/qwen) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | |
| 142 | +| [Mixtral](./llm/config/mixtral) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
| 143 | +| [Mistral](./llm/config/mistral) | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | 🚧 | ✅ | 🚧 | |
| 144 | +| [Baichuan/Baichuan2](./llm/config/llama) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | |
| 145 | +| [ChatGLM-6B](./llm/config/chatglm) | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | 🚧 | ✅ | ✅ | |
| 146 | +| [ChatGLM2/ChatGLM3](./llm/config/chatglm2) | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ | 🚧 | ✅ | ✅ | |
| 147 | +| [Bloom](./llm/config/bloom) | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 | 🚧 | ✅ | ✅ | |
| 148 | +| [GPT-3](./llm/config/gpt-3) | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | ✅ | 🚧 | |
| 149 | +| [OPT](./llm/config/opt) | ✅ | ✅ | ✅ | 🚧 | 🚧 | 🚧 | 🚧 | ✅ | 🚧 | |
| 150 | +| [Gemma](./llm/config/gemma) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
| 151 | +| [Yuan](./llm/config/yuan) | ✅ | ✅ | ✅ | 🚧 | 🚧 | ✅ | 🚧 | ✅ | 🚧 | |
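As a usage sketch for the matrix above: supervised fine-tuning (SFT) is typically launched through the `run_finetune.py` entry script with one of the per-model JSON configs under `llm/config/`; the exact config file name below is an assumption, so check the directory for the model you need.

```shell
# Sketch only: SFT with a per-model JSON config; swap the config path for your model.
cd llm
python run_finetune.py ./config/llama/sft_argument.json
```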
146 | 152 | * [Large model inference](./llm/docs/predict/inference.md) already supports the LLaMA, Qwen, Mistral, ChatGLM, Bloom, and Baichuan series, with Weight-Only INT8 and INT4 inference as well as INT8 and FP8 quantized inference for WAC (weights, activations, Cache KV). The supported LLM inference models are listed below (a usage sketch follows the table): |
147 | 153 |
|
148 | 154 | | Model / Supported quantization type | FP16/BF16 | WINT8 | WINT4 | INT8-A8W8 | FP8-A8W8 | INT8-A8W8C8 |
|
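As a rough sketch of how the quantization types in the table above are selected at inference time, assuming the same predictor entry point and a `--quant_type` flag (the flag name and value are assumptions; consult the [inference docs](./llm/docs/predict/inference.md) for authoritative usage):

```shell
# Sketch only: weight-only INT8 inference; --quant_type and its value are assumptions.
cd llm
python ./predict/predictor.py \
    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
    --inference_model \
    --dtype float16 \
    --quant_type weight_only_int8
```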