
Commit 6878b1c

fix cogvlm2 history (#1005)
1 parent 6d55321 commit 6878b1c

22 files changed, +548 -172 lines changed

docs/source/LLM/支持的模型和数据集.md

Lines changed: 3 additions & 3 deletions
@@ -273,9 +273,9 @@
 |phi2-3b|[AI-ModelScope/phi-2](https://modelscope.cn/models/AI-ModelScope/phi-2/summary)|Wqkv|default-generation|✔|✔||coding|[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)|
 |phi3-4b-4k-instruct|[LLM-Research/Phi-3-mini-4k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-4k-instruct/summary)|qkv_proj|phi3|✔|✘|transformers>=4.36|general|[microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)|
 |phi3-4b-128k-instruct|[LLM-Research/Phi-3-mini-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-128k-instruct/summary)|qkv_proj|phi3|✔|✔|transformers>=4.36|general|[microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)|
-|phi3-vision-128k-instruct|[LLM-Research/Phi-3-vision-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary)|qkv_proj|phi3-vl|✘|✘||-|[microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
-|phi3_small_128k_instruct|[LLM-Research/Phi-3-small-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-small-128k-instruct/summary)|qkv_proj|phi3|✔|✔|transformers>=4.36|general|[microsoft/Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)|
-|phi3_medium_128k_instruct|[LLM-Research/Phi-3-medium-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-medium-128k-instruct/summary)|qkv_proj|phi3|✔|✔|transformers>=4.36|general|[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)|
+|phi3-small-128k-instruct|[LLM-Research/Phi-3-small-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-small-128k-instruct/summary)|qkv_proj|phi3|✔|✔|transformers>=4.36|general|[microsoft/Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)|
+|phi3-medium-128k-instruct|[LLM-Research/Phi-3-medium-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-medium-128k-instruct/summary)|qkv_proj|phi3|✔|✔|transformers>=4.36|general|[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)|
+|phi3-vision-128k-instruct|[LLM-Research/Phi-3-vision-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary)|qkv_proj|phi3-vl|✔|✘|transformers>=4.36|multi-modal, vision|[microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
 |cogvlm-17b-chat|[ZhipuAI/cogvlm-chat](https://modelscope.cn/models/ZhipuAI/cogvlm-chat/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|✘|✘||multi-modal, vision|[THUDM/cogvlm-chat-hf](https://huggingface.co/THUDM/cogvlm-chat-hf)|
 |cogvlm2-19b-chat|[ZhipuAI/cogvlm2-llama3-chinese-chat-19B](https://modelscope.cn/models/ZhipuAI/cogvlm2-llama3-chinese-chat-19B/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|✘|✘||-|[THUDM/cogvlm2-llama3-chinese-chat-19B](https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B)|
 |cogvlm2-en-19b-chat|[ZhipuAI/cogvlm2-llama3-chat-19B](https://modelscope.cn/models/ZhipuAI/cogvlm2-llama3-chat-19B/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|✘|✘||-|[THUDM/cogvlm2-llama3-chat-19B](https://huggingface.co/THUDM/cogvlm2-llama3-chat-19B)|

docs/source/Multi-Modal/cogvlm2最佳实践.md

Lines changed: 3 additions & 3 deletions
@@ -114,14 +114,14 @@ seed_everything(42)
 
 images = ['http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png']
 query = '距离各城市多远?'
-response, _ = inference(model, template, query, images=images)
+response, history = inference(model, template, query, images=images)
 print(f'query: {query}')
 print(f'response: {response}')
 
 # Streaming
 query = '距离最远的城市是哪?'
 images = images
-gen = inference_stream(model, template, query, images=images)
+gen = inference_stream(model, template, query, history, images=images)
 print_idx = 0
 print(f'query: {query}\nresponse: ', end='')
 for response, _ in gen:
@@ -134,7 +134,7 @@ print()
 query: 距离各城市多远?
 response: 距离马踏Mata有14km,距离阳江Yangjiang有62km,距离广州Guangzhou有293km。
 query: 距离最远的城市是哪?
-response: 距离最远的城市是广州Guangzhou。
+response: 距离最远的城市是广州Guangzhou,有293km
 """
 ```
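The two `+` lines above are the substance of this commit: the `history` returned by the first `inference` call must be threaded into `inference_stream`, otherwise the streamed follow-up question about the same image is answered without context. A minimal sketch of the corrected flow, assuming `model` and `template` have already been built for a CogVLM2 chat model as in the rest of this best-practice document (e.g. via `get_model_tokenizer` and `get_template` from `swift.llm`):

```python
from swift.llm import inference, inference_stream

images = ['http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png']

# Turn 1: keep the returned history instead of discarding it.
query = '距离各城市多远?'
response, history = inference(model, template, query, images=images)
print(f'query: {query}\nresponse: {response}')

# Turn 2 (streaming): pass the history so the model answers in the context of turn 1.
query = '距离最远的城市是哪?'
gen = inference_stream(model, template, query, history, images=images)
print_idx = 0
print(f'query: {query}\nresponse: ', end='')
for response, history in gen:
    print(response[print_idx:], end='', flush=True)
    print_idx = len(response)
print()
```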

docs/source/Multi-Modal/index.md

Lines changed: 16 additions & 8 deletions
@@ -2,13 +2,21 @@
 
 ### Multi-Modal Best Practice Series
 
+A single round of dialogue can contain multiple images (or no images):
 1. [Qwen-VL Best Practice](qwen-vl最佳实践.md)
 2. [Qwen-Audio Best Practice](qwen-audio最佳实践.md)
-3. [Llava Best Practice](llava最佳实践.md)
-4. [Deepseek-VL Best Practice](deepseek-vl最佳实践.md)
-5. [Yi-VL Best Practice](yi-vl最佳实践.md)
-6. [Internlm2-Xcomposers Best Practice](internlm-xcomposer2最佳实践.md)
-7. [MiniCPM-V Best Practice](minicpm-v最佳实践.md), [MiniCPM-V-2 Best Practice](minicpm-v-2最佳实践.md), [MiniCPM-V-2.5 Best Practice](minicpm-v-2.5最佳实践.md)
-8. [CogVLM Best Practice](cogvlm最佳实践.md), [CogVLM2 Best Practice](cogvlm2最佳实践.md)
-9. [mPLUG-Owl2 Best Practice](mplug-owl2最佳实践.md)
-10. [InternVL-Chat-V1.5 Best Practice](internvl最佳实践.md)
+3. [Deepseek-VL Best Practice](deepseek-vl最佳实践.md)
+4. [Internlm2-Xcomposers Best Practice](internlm-xcomposer2最佳实践.md)
+5. [Phi3-Vision Best Practice](phi3-vision最佳实践.md)
+
+A single round of dialogue can only contain one image:
+1. [Llava Best Practice](llava最佳实践.md)
+2. [Yi-VL Best Practice](yi-vl最佳实践.md)
+3. [mPLUG-Owl2 Best Practice](mplug-owl2最佳实践.md)
+4. [InternVL-Chat-V1.5 Best Practice](internvl最佳实践.md)
+
+The entire dialogue revolves around one image:
+1. [CogVLM Best Practice](cogvlm最佳实践.md), [CogVLM2 Best Practice](cogvlm2最佳实践.md)
+2. [MiniCPM-V Best Practice](minicpm-v最佳实践.md), [MiniCPM-V-2 Best Practice](minicpm-v-2最佳实践.md), [MiniCPM-V-2.5 Best Practice](minicpm-v-2.5最佳实践.md)

docs/source/Multi-Modal/internlm-xcomposer2最佳实践.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ print()
 print(f'history: {history}')
 """
 query: <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>距离各城市多远?
-response: 马鞍山距离阳江62公里,广州距离广州293公里。
+response: 马鞍山距离阳江62公里,广州距离广州293公里。
 query: 距离最远的城市是哪?
 response: 距离最最远的城市是广州,距离广州293公里。
 history: [['<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>距离各城市多远?', ' 马鞍山距离阳江62公里,广州距离广州293公里。'], ['距离最远的城市是哪?', ' 距离最远的城市是广州,距离广州293公里。']]
Lines changed: 199 additions & 0 deletions
@@ -0,0 +1,199 @@

# Phi3-Vision Best Practice

## Table of Contents
- [Environment Preparation](#environment-preparation)
- [Inference](#inference)
- [Fine-tuning](#fine-tuning)
- [Inference After Fine-tuning](#inference-after-fine-tuning)


## Environment Preparation
```shell
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e '.[llm]'
```
Model link:
- phi3-vision-128k-instruct: [https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary](https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary)

## Inference

Inference with phi3-vision-128k-instruct:
```shell
# Experimental environment: A10, 3090, V100, ...
# 16GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift infer --model_type phi3-vision-128k-instruct
```

Output: (local paths or URLs can be passed in)
```python
"""
<<< Who are you?
I am Phi, an AI developed by Microsoft to assist with providing information, answering questions, and helping users find solutions to their queries. How can I assist you today?
--------------------------------------------------
<<< clear
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png</img><img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png</img>What is the difference between these two pictures?
The first picture shows a group of four cartoon sheep standing in a field, while the second picture is a close-up of a kitten with a blurred background. The main difference between these two pictures is the subject matter and the setting. The first picture features animals that are typically associated with farm life and agriculture, while the second picture focuses on a domestic animal, a kitten, which is more commonly found in households. Additionally, the first picture has a more peaceful and serene atmosphere, while the second picture has a more intimate and detailed view of the kitten.
--------------------------------------------------
<<< clear
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png</img>How many sheep are there in the picture?
There are four sheep in the picture.
--------------------------------------------------
<<< clear
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png</img>What is the result of the calculation?
The result of the calculation 1452 + 45304 is 46756.
--------------------------------------------------
<<< clear
<<< <img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png</img>Write a poem based on the content of the picture.
In the tranquil night, a boat sails,
Through the darkened river, it sets sail.
A single candle flickers, casting light,
Guiding the way through the endless night.

The stars above, like diamonds bright,
Gleam down upon the boat's gentle flight.
The moon, a silent guardian in the sky,
Watches over the boat as it sails by.

The river, a mirror to the night,
Reflects the boat's journey, a beautiful sight.
The trees on either side, standing tall,
Whisper secrets to the boat, one and all.

In the stillness of the night, a sense of peace,
The boat, the river, the trees, all in their place.
A moment frozen in time, a scene so serene,
A journey through the night, a dream so unseen.
"""
```

Sample images:

cat:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png" width="250" style="display: inline-block;">

animal:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png" width="250" style="display: inline-block;">

math:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png" width="250" style="display: inline-block;">

poem:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png" width="250" style="display: inline-block;">

**Single-sample inference**

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType,
    get_default_template_type, inference_stream
)
from swift.utils import seed_everything
import torch

model_type = ModelType.phi3_vision_128k_instruct
template_type = get_default_template_type(model_type)
print(f'template_type: {template_type}')

model, tokenizer = get_model_tokenizer(model_type, torch.float16,
                                       model_kwargs={'device_map': 'auto'})
model.generation_config.max_new_tokens = 256
template = get_template(template_type, tokenizer)
seed_everything(42)

query = """<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>How far is it from each city?"""
response, history = inference(model, template, query)
print(f'query: {query}')
print(f'response: {response}')

# Streaming
query = 'Which city is the farthest?'
gen = inference_stream(model, template, query, history)
print_idx = 0
print(f'query: {query}\nresponse: ', end='')
for response, history in gen:
    delta = response[print_idx:]
    print(delta, end='', flush=True)
    print_idx = len(response)
print()
print(f'history: {history}')
"""
query: Which city is the farthest?
response: Guangzhou is the farthest city, located 293km away.
history: [['<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png</img>How far is it from each city?', 'The distances are as follows: Mata is 14km away, Yangjiang is 62km away, and Guangzhou is 293km away.'], ['Which city is the farthest?', 'Guangzhou is the farthest city, located 293km away.']]
"""
```

Sample image:

road:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png" width="250" style="display: inline-block;">

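Phi3-Vision also accepts several images in one turn, as the CLI example above shows; the same pattern should work through the Python API by simply concatenating the `<img>` tags in the query. A small sketch reusing the `model` and `template` objects built in the snippet above, with the query borrowed from the earlier CLI example:

```python
# Assumes `model` and `template` from the single-sample inference snippet above.
from swift.llm import inference

query = ('<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png</img>'
         '<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png</img>'
         'What is the difference between these two pictures?')
response, _ = inference(model, template, query)
print(f'response: {response}')
```
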
## Fine-tuning
Fine-tuning of multi-modal large models usually uses **custom datasets**. Here is a demo that can be run directly:

(By default, only the qkv of the LLM part is fine-tuned with LoRA. If you want to fine-tune all linear layers, including the vision part of the model, specify `--lora_target_modules ALL`. Full-parameter fine-tuning is also supported.)
```shell
# Experimental environment: A10, 3090, V100, ...
# 16GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift sft \
    --model_type phi3-vision-128k-instruct \
    --dataset coco-en-mini
```

[Custom datasets](../LLM/自定义与拓展.md#-推荐命令行参数的形式) support json and jsonl formats. Below is an example of a custom dataset:

(Multi-turn dialogue is supported; each turn may contain multiple images or none; images can be passed as local paths or URLs.)

```json
[
    {"conversations": [
        {"from": "user", "value": "<img>img_path</img>11111"},
        {"from": "assistant", "value": "22222"}
    ]},
    {"conversations": [
        {"from": "user", "value": "<img>img_path</img><img>img_path2</img><img>img_path3</img>aaaaa"},
        {"from": "assistant", "value": "bbbbb"},
        {"from": "user", "value": "<img>img_path</img>ccccc"},
        {"from": "assistant", "value": "ddddd"}
    ]},
    {"conversations": [
        {"from": "user", "value": "AAAAA"},
        {"from": "assistant", "value": "BBBBB"},
        {"from": "user", "value": "CCCCC"},
        {"from": "assistant", "value": "DDDDD"}
    ]}
]
```
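To make the schema concrete, here is a small self-contained Python sketch (not part of swift) that walks such a file and separates each turn's inline `<img>` paths from its text; the file name `custom_data.json` is only an illustrative placeholder:

```python
import json
import re

# Illustrative placeholder path; point it at your own dataset file.
with open('custom_data.json', 'r', encoding='utf-8') as f:
    samples = json.load(f)

IMG_TAG = re.compile(r'<img>(.*?)</img>')

for i, sample in enumerate(samples):
    for turn in sample['conversations']:
        images = IMG_TAG.findall(turn['value'])  # zero or more images per turn
        text = IMG_TAG.sub('', turn['value'])    # the remaining plain text
        print(f'sample {i} [{turn["from"]}] images={images} text={text!r}')
```
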


## Inference After Fine-tuning
Direct inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift infer \
    --ckpt_dir output/phi3-vision-128k-instruct/vx-xxx/checkpoint-xxx \
    --load_dataset_config true
```

**merge-lora** and inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift export \
    --ckpt_dir output/phi3-vision-128k-instruct/vx-xxx/checkpoint-xxx \
    --merge_lora true --safe_serialization false

CUDA_VISIBLE_DEVICES=0 swift infer \
    --ckpt_dir output/phi3-vision-128k-instruct/vx-xxx/checkpoint-xxx-merged \
    --load_dataset_config true
```

docs/source_en/LLM/Supported-models-datasets.md

Lines changed: 3 additions & 3 deletions
@@ -273,9 +273,9 @@ The table below introduces all models supported by SWIFT:
 |phi2-3b|[AI-ModelScope/phi-2](https://modelscope.cn/models/AI-ModelScope/phi-2/summary)|Wqkv|default-generation|&#x2714;|&#x2714;||coding|[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)|
 |phi3-4b-4k-instruct|[LLM-Research/Phi-3-mini-4k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-4k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2718;|transformers>=4.36|general|[microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)|
 |phi3-4b-128k-instruct|[LLM-Research/Phi-3-mini-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-128k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2714;|transformers>=4.36|general|[microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)|
-|phi3-vision-128k-instruct|[LLM-Research/Phi-3-vision-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary)|qkv_proj|phi3-vl|&#x2718;|&#x2718;||-|[microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
-|phi3_small_128k_instruct|[LLM-Research/Phi-3-small-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-small-128k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2714;|transformers>=4.36|general|[microsoft/Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)|
-|phi3_medium_128k_instruct|[LLM-Research/Phi-3-medium-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-medium-128k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2714;|transformers>=4.36|general|[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)|
+|phi3-small-128k-instruct|[LLM-Research/Phi-3-small-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-small-128k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2714;|transformers>=4.36|general|[microsoft/Phi-3-small-128k-instruct](https://huggingface.co/microsoft/Phi-3-small-128k-instruct)|
+|phi3-medium-128k-instruct|[LLM-Research/Phi-3-medium-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-medium-128k-instruct/summary)|qkv_proj|phi3|&#x2714;|&#x2714;|transformers>=4.36|general|[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)|
+|phi3-vision-128k-instruct|[LLM-Research/Phi-3-vision-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-vision-128k-instruct/summary)|qkv_proj|phi3-vl|&#x2714;|&#x2718;|transformers>=4.36|multi-modal, vision|[microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
 |cogvlm-17b-chat|[ZhipuAI/cogvlm-chat](https://modelscope.cn/models/ZhipuAI/cogvlm-chat/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|&#x2718;|&#x2718;||multi-modal, vision|[THUDM/cogvlm-chat-hf](https://huggingface.co/THUDM/cogvlm-chat-hf)|
 |cogvlm2-19b-chat|[ZhipuAI/cogvlm2-llama3-chinese-chat-19B](https://modelscope.cn/models/ZhipuAI/cogvlm2-llama3-chinese-chat-19B/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|&#x2718;|&#x2718;||-|[THUDM/cogvlm2-llama3-chinese-chat-19B](https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B)|
 |cogvlm2-en-19b-chat|[ZhipuAI/cogvlm2-llama3-chat-19B](https://modelscope.cn/models/ZhipuAI/cogvlm2-llama3-chat-19B/summary)|vision_expert_query_key_value, vision_expert_dense, language_expert_query_key_value, language_expert_dense|cogvlm|&#x2718;|&#x2718;||-|[THUDM/cogvlm2-llama3-chat-19B](https://huggingface.co/THUDM/cogvlm2-llama3-chat-19B)|

docs/source_en/Multi-Modal/cogvlm-best-practice.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# CogVLM Best Practices
+# CogVLM Best Practice
 
 ## Table of Contents
 - [Environment Setup](#environment-setup)

docs/source_en/Multi-Modal/cogvlm2-best-practice.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# CogVLM2 Best Practices
+# CogVLM2 Best Practice
 
 ## Table of Contents
 - [Environment Setup](#environment-setup)

docs/source_en/Multi-Modal/deepseek-vl-best-practice.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Deepseek-VL Best Practices
+# Deepseek-VL Best Practice
 
 ## Table of Contents
 - [Environment Preparation](#environment-preparation)
Lines changed: 19 additions & 11 deletions
@@ -1,13 +1,21 @@
 ## Multi-Modal Documentation
 
-### Multi-Modal Best Practices
-
-1. [Qwen-VL Best Practices](qwen-vl-best-practice.md)
-2. [Qwen-Audio Best Practices](qwen-audio-best-practice.md)
-3. [Llava Best Practices](llava-best-practice.md)
-4. [Deepseek-VL Best Practices](deepseek-vl-best-practice.md)
-5. [Yi-VL Best Practices.md](yi-vl-best-practice.md)
-6. [Internlm2-Xcomposers Best Practices](internlm-xcomposer2-best-practice.md)
-7. [MiniCPM-V Best Practices](minicpm-v-best-practice.md)
-8. [CogVLM Best Practices](cogvlm-best-practice.md), [CogVLM2 Best Practices](cogvlm2-best-practice.md)
-9. [InternVL-Chat-V1.5 Best Practices](internvl-best-practice.md)
+### Multi-Modal Best Practice
+
+A single round of dialogue can contain multiple images (or no images):
+1. [Qwen-VL Best Practice](qwen-vl-best-practice.md)
+2. [Qwen-Audio Best Practice](qwen-audio-best-practice.md)
+3. [Deepseek-VL Best Practice](deepseek-vl-best-practice.md)
+4. [Internlm2-Xcomposers Best Practice](internlm-xcomposer2-best-practice.md)
+5. [Phi3-Vision Best Practice](phi3-vision-best-practice.md)
+
+A single round of dialogue can only contain one image:
+1. [Llava Best Practice](llava-best-practice.md)
+2. [Yi-VL Best Practice](yi-vl-best-practice.md)
+3. [InternVL-Chat-V1.5 Best Practice](internvl-best-practice.md)
+
+The entire dialogue revolves around one image:
+1. [CogVLM Best Practice](cogvlm-best-practice.md), [CogVLM2 Best Practice](cogvlm2-best-practice.md)
+2. [MiniCPM-V Best Practice](minicpm-v-best-practice.md)
