FastChat is an open platform for training, serving, and evaluating large language model based chatbots.
- FastChat powers Chatbot Arena ([lmarena.ai](https://lmarena.ai)), serving over 10 million chat requests for 70+ LLMs.
- Chatbot Arena has collected over 1.5M human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://lmarena.ai/?leaderboard).
FastChat's core features include:
- The training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench).
- A distributed multi-model serving system with web UI and OpenAI-compatible RESTful APIs.
(Experimental Feature: You can specify `--style rich` to enable rich text output and better text streaming quality for some non-ASCII content. This may not work properly on certain terminals.)
To serve using the web UI, you need three main components: web servers that interface with users, model workers that host one or more models, and a controller to coordinate the webserver and model workers. You can learn more about the architecture [here](docs/server_arch.md).
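Concretely, the three components are typically launched as three separate processes; a minimal sketch (the model path is an example — substitute your own):

```shell
# 1. Controller: coordinates the web server and model workers.
python3 -m fastchat.serve.controller

# 2. Model worker: hosts a model and registers itself with the controller.
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5

# 3. Gradio web server: the user-facing UI.
python3 -m fastchat.serve.gradio_web_server
```

Start the controller first so the worker has something to register with; the web server can come up last.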
By following these steps, you will be able to serve your models using the web UI. You can open your browser and chat with a model now.
If the models do not show up, try restarting the gradio web server.
## Launch Chatbot Arena (side-by-side battle UI)
Currently, Chatbot Arena is powered by FastChat. Here is how you can launch an instance of Chatbot Arena locally.
FastChat supports popular API-based models such as OpenAI, Anthropic, Gemini, Mistral, and more. To add a custom API, please refer to the model support [doc](./docs/model_support.md). Below we take OpenAI models as an example.
Create a JSON configuration file `api_endpoint.json` with the API endpoints of the models you want to serve, for example:
```
{
    "gpt-4o-2024-05-13": {
        "model_name": "gpt-4o-2024-05-13",
        "api_base": "https://api.openai.com/v1",
        "api_type": "openai",
        "api_key": [Insert API Key],
        "anony_only": false
    }
}
```
For Anthropic models, specify `"api_type": "anthropic_message"` with your Anthropic key. Similarly, for Gemini models, specify `"api_type": "gemini"`. More details can be found in [api_provider.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/api_provider.py).
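As a rough illustration of how the `api_type` field routes a request to a provider-specific handler — the handler names below are hypothetical placeholders, not FastChat's actual internals; the real dispatch lives in `api_provider.py`:

```python
# Hypothetical sketch of api_type-based dispatch. Handler names are
# illustrative placeholders; see fastchat/serve/api_provider.py for the
# real implementation.
HANDLERS = {
    "openai": "call_openai",
    "anthropic_message": "call_anthropic_messages",
    "gemini": "call_gemini",
}

def pick_handler(endpoint_info: dict) -> str:
    # Fall back to the OpenAI-compatible handler when api_type is omitted.
    api_type = endpoint_info.get("api_type", "openai")
    if api_type not in HANDLERS:
        raise ValueError(f"Unsupported api_type: {api_type}")
    return HANDLERS[api_type]

print(pick_handler({"api_type": "anthropic_message"}))  # call_anthropic_messages
```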
To serve your own model using local GPUs, follow the instructions in [Serving with Web GUI](#serving-with-web-gui).
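With the endpoint config in place, the Arena web server can be pointed at it; a minimal sketch, assuming `api_endpoint.json` sits in the working directory:

```shell
# Launch the multi-tab Arena UI and register the API-based endpoints.
python3 -m fastchat.serve.gradio_web_server_multi --register-api-endpoint-file api_endpoint.json
```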
#### (Optional): Advanced Features, Scalability, Third Party UI
- You can register multiple model workers to a single controller, which can be used for serving a single model with higher throughput or serving multiple models at the same time. When doing so, please allocate different GPUs and ports for different model workers.
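For example, two workers serving the same model on different GPUs and ports might be launched like this (a sketch based on the standard worker flags; adjust the model path, ports, and controller address to your setup):

```shell
# Worker A on GPU 0, port 31000
CUDA_VISIBLE_DEVICES=0 python3 -m fastchat.serve.model_worker \
    --model-path lmsys/vicuna-7b-v1.5 \
    --controller http://localhost:21001 \
    --port 31000 --worker http://localhost:31000

# Worker B on GPU 1, port 31001
CUDA_VISIBLE_DEVICES=1 python3 -m fastchat.serve.model_worker \
    --model-path lmsys/vicuna-7b-v1.5 \
    --controller http://localhost:21001 \
    --port 31001 --worker http://localhost:31001
```

The controller load-balances requests across all workers registered for a model.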
[DashInfer](https://github.com/modelscope/dash-infer) is a high-performance inference engine optimized for CPU environments. It accelerates a variety of models including Llama, Qwen, and ChatGLM, and delivers significant speedups on both Intel x64 and ARMv9 processors, making it a strong choice as a FastChat worker in resource-constrained deployments or wherever CPU inference is preferred over GPU acceleration.
## Instructions
1. Install dash-infer.
```
pip install dashinfer
```
2. When you launch a model worker, replace the normal worker (`fastchat.serve.model_worker`) with the dash-infer worker (`fastchat.serve.dashinfer_worker`). All other commands such as controller, gradio web server, and OpenAI API server are kept the same.
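A sketch of the swapped-in launch command (the model path here is illustrative; see the DashInfer docs for supported models and any engine-specific flags):

```shell
# Only the worker module changes; controller, gradio web server, and
# OpenAI API server are launched exactly as before.
python3 -m fastchat.serve.dashinfer_worker --model-path qwen/Qwen-7B-Chat
```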
<span style='font-weight: bold; font-size: 20px; color: #050505; '>🔔 New Arena UI at <a href='https://lmarena.ai/leaderboard?utm_campaign=hf_banner' target="_blank" rel="noopener noreferrer" style="color: #233F9C; text-decoration: underline;">lmarena.ai/leaderboard</a>! Check it out and give feedback!</span>
</div>
</div>"""
##### For the gradio web server
SERVER_ERROR_MSG = (
    "**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**"
)
MODERATION_MSG = "$MODERATION$ YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES."
CONVERSATION_LIMIT_MSG = "YOU HAVE REACHED THE CONVERSATION LENGTH LIMIT. PLEASE CLEAR HISTORY AND START A NEW CONVERSATION."
INACTIVE_MSG = "THIS SESSION HAS BEEN INACTIVE FOR TOO LONG. PLEASE REFRESH THIS PAGE."
SLOW_MODEL_MSG = (
    "⚠️ Models are thinking. Please stay patient as it may take over a minute."
)
RATE_LIMIT_MSG = "**RATE LIMIT OF THIS MODEL IS REACHED. PLEASE COME BACK LATER OR USE <span style='color: red; font-weight: bold;'>[BATTLE MODE](https://lmarena.ai)</span> (the 1st tab).**"