
Commit 568dcb7

Update README.md for model deployment (#81)
* Update README.md
1 parent 416720e commit 568dcb7

2 files changed: +61 -17 lines

README.md

Lines changed: 44 additions & 0 deletions
@@ -576,6 +576,50 @@ response = model.chat(tokenizer, pixel_values, question, generation_config)
 
 </details>
 
+## Chat Web Demo
+
+<details>
+<summary>Launch a local chat demo (click to expand)</summary>
+
+**Launch a controller**
+
+```shell
+# run the command in the `internvl_chat_llava` folder
+python -m llava.serve.controller --host 0.0.0.0 --port 10000
+```
+
+**Launch a gradio web server**
+
+```shell
+# run the command in the `internvl_chat_llava` folder
+python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
+```
+
+**Launch a model worker**
+
+```shell
+# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
+# run the command in the `internvl_chat_llava` folder
+python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./path/to/InternVL-Chat-ViT-6B-Vicuna-7B
+
+# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-13B
+# run the command in the `internvl_chat_llava` folder
+python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model-path ./path/to/InternVL-Chat-ViT-6B-Vicuna-13B
+
+# OpenGVLab/InternVL-Chat-Chinese-V1-1
+# run the command in the `internvl_chat` folder
+python -m internvl.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40002 --worker http://localhost:40002 --model-path ./path/to/InternVL-Chat-Chinese-V1-1
+
+# OpenGVLab/InternVL-Chat-Chinese-V1-2
+# run the command in the `internvl_chat` folder
+python -m internvl.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40003 --worker http://localhost:40003 --model-path ./path/to/InternVL-Chat-Chinese-V1-2
+
+# OpenGVLab/InternVL-Chat-Chinese-V1-2-Plus
+# run the command in the `internvl_chat` folder
+python -m internvl.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40004 --worker http://localhost:40004 --model-path ./path/to/InternVL-Chat-Chinese-V1-2-Plus
+```
+
+</details>
+
 ## Schedule
 
 - [x] Release high-resolution models
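Note: the new section registers several model workers with a single controller. As a quick sanity check after launching everything, you can ask the controller which models are currently registered. The sketch below is an assumption-laden example: it presumes the controller keeps the upstream LLaVA endpoints `/refresh_all_workers` and `/list_models` on port 10000, which should be verified against the controller code in your checkout.

```shell
# Minimal sanity check (assumption: the controller exposes the upstream LLaVA
# endpoints /refresh_all_workers and /list_models on port 10000).
curl -X POST http://localhost:10000/refresh_all_workers
curl -X POST http://localhost:10000/list_models
# Expected: a JSON response listing the model names served by the registered
# workers, e.g. InternVL-Chat-Chinese-V1-2-Plus.
```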

internvl_chat_llava/README.md

Lines changed: 17 additions & 17 deletions
@@ -128,28 +128,28 @@ pretrained
 
 The method for deploying the demo is consistent with LLaVA-1.5. You only need to change the model path. The specific steps are as follows:
 
-1. Launch a controller
+**Launch a controller**
 
-```shell
-python -m llava.serve.controller --host 0.0.0.0 --port 10000
-```
+```shell
+python -m llava.serve.controller --host 0.0.0.0 --port 10000
+```
 
-2. Launch a gradio web server
+**Launch a gradio web server**
 
-```shell
-python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
-```
+```shell
+python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
+```
 
-3. Launch a model worker
+**Launch a model worker**
 
-```shell
-# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
-python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./pretrained/InternVL-Chat-ViT-6B-Vicuna-7B
-# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-13B
-python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model-path ./pretrained/InternVL-Chat-ViT-6B-Vicuna-13B
-```
-
-For more details to deploy the demo, please refer to [here](#gradio-web-ui).
+```shell
+# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
+python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./pretrained/InternVL-Chat-ViT-6B-Vicuna-7B
+# OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-13B
+python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40001 --worker http://localhost:40001 --model-path ./pretrained/InternVL-Chat-ViT-6B-Vicuna-13B
+```
+
+For more details on deploying the demo, please refer to [here](#gradio-web-ui).
 
 ## 💡Testing
 
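Note: the controller, web server, and model worker each occupy a terminal. One way to keep all three running from a single shell is to background each process and redirect its output to a log file. The sketch below only rearranges the commands already shown in this README; the log file names are illustrative, not part of the project.

```shell
# Run from the internvl_chat_llava folder. Each service is started in the
# background; log file names here are examples only.
python -m llava.serve.controller --host 0.0.0.0 --port 10000 > controller.log 2>&1 &
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload > gradio_web_server.log 2>&1 &
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./pretrained/InternVL-Chat-ViT-6B-Vicuna-7B > model_worker.log 2>&1 &
```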
