FastChat is an open platform for training, serving, and evaluating chatbots based on large language models.
- FastChat powers Chatbot Arena ([lmarena.ai](https://lmarena.ai)), serving over 10 million chat requests for 70+ LLMs.
- Chatbot Arena has collected over 1.5M human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://lmarena.ai/?leaderboard).
FastChat's core features include:
- The training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench).
(Experimental Feature: You can specify `--style rich` to enable rich text output and better text streaming quality for some non-ASCII content. This may not work properly on certain terminals.)
To serve using the web UI, you need three main components: web servers that interface with users, model workers that host one or more models, and a controller to coordinate the webserver and model workers. You can learn more about the architecture [here](docs/server_arch.md).
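As a concrete sketch, the three components can be started in separate terminals; the model path below is only an example:

```shell
# Terminal 1: the controller that coordinates model workers
python3 -m fastchat.serve.controller

# Terminal 2: a model worker hosting one model (path is an example)
python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5

# Terminal 3: the Gradio web server that users interact with
python3 -m fastchat.serve.gradio_web_server
```

Workers register themselves with the controller, and the web server asks the controller which worker serves each requested model.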
`docs/arena.md`
# Chatbot Arena
Chatbot Arena is an LLM benchmark platform featuring anonymous, randomized battles, available at https://lmarena.ai.
We invite the entire community to join this benchmarking effort by contributing your votes and models.
## How to add a new model
### Method 2: Hosted by LMSYS
1. Contribute the code to support this model in FastChat by submitting a pull request. See [instructions](model_support.md).
2. After the model is supported, we will try to schedule some compute resources to host the model in the arena. However, due to the limited resources we have, we may not be able to serve every model. We will select the models based on popularity, quality, diversity, and other factors.
## How to launch vision arena
1. Run `python3 -m fastchat.serve.controller` to start the controller and begin registering local model workers and API-provided workers.
2. Run `python3 -m fastchat.serve.sglang_worker --model-path <model-path> --tokenizer-path <tokenizer-path>` to run local vision-language models. Currently supported models include the LLaVA and Yi-VL series.
3. If you are using a 3rd-party model with an API provider (e.g., GPT-4-V, Gemini 1.5), follow the instructions in [model_support.md](model_support.md) to add a JSON file `api_endpoints.json`.
4. Run the Gradio server with the `--vision-arena` flag enabled.
5. To store images in a remote directory, add the flag `--use-remote-storage`.
6. To enable sampling of random example questions, add `--random_questions metadata_sampled.json`. See below for how to generate this file.
1. Adding the NSFW endpoint and API key: add the following environment variables to run the NSFW moderation filter for images:
   - `AZURE_IMG_MODERATION_ENDPOINT`: the endpoint where the NSFW moderator is hosted (e.g., https://{endpoint}/contentmoderator/moderate/v1.0/ProcessImage/Evaluate). Replace `endpoint` with your own.
   - `AZURE_IMG_MODERATION_API_KEY`: your API key for this endpoint.
2. Adding the CSAM API key:
   - `PHOTODNA_API_KEY`: the API key for the CSAM detector endpoint.
We provide random samples of example images for users to interact with, drawn from various datasets including DocVQA, RealWorldQA, ChartQA, and VizWiz-VQA.
1. Download the images and generate the random-questions file by running `python fastchat/serve/vision/create_vqa_examples_dir.py`.
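Putting the launch steps above together, a session might look like the following; the paths, credential values, and the exact Gradio server module name are assumptions to adapt to your setup:

```shell
# Step 1: controller
python3 -m fastchat.serve.controller

# Step 2: local vision-language worker (paths are placeholders)
python3 -m fastchat.serve.sglang_worker --model-path <model-path> --tokenizer-path <tokenizer-path>

# Moderation credentials (values are placeholders)
export AZURE_IMG_MODERATION_ENDPOINT="https://<endpoint>/contentmoderator/moderate/v1.0/ProcessImage/Evaluate"
export AZURE_IMG_MODERATION_API_KEY="<key>"
export PHOTODNA_API_KEY="<key>"

# Steps 4-6: Gradio server with vision arena, remote storage, and random questions
# (module name gradio_web_server_multi is an assumption)
python3 -m fastchat.serve.gradio_web_server_multi --vision-arena --use-remote-storage --random_questions metadata_sampled.json
```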
[DashInfer](https://github.com/modelscope/dash-infer) is a high-performance inference engine optimized for CPU environments. It supports acceleration for a variety of models, including Llama, Qwen, and ChatGLM, and delivers significant performance gains on both Intel x64 and ARMv9 processors. This makes it a good fit as a performant FastChat worker when deploying large language models in resource-constrained environments, or wherever CPU inference is preferred over GPU acceleration.
## Instructions
1. Install dash-infer.

```
pip install dashinfer
```
2. When you launch a model worker, replace the normal worker (`fastchat.serve.model_worker`) with the dash-infer worker (`fastchat.serve.dashinfer_worker`). All other commands, such as the controller, Gradio web server, and OpenAI API server, stay the same.
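A minimal sketch of the swap, assuming `dashinfer_worker` accepts the same `--model-path` style argument as the default worker:

```shell
# Controller and web server are unchanged
python3 -m fastchat.serve.controller
python3 -m fastchat.serve.gradio_web_server

# Only the worker module changes (the flag shown is an assumption)
python3 -m fastchat.serve.dashinfer_worker --model-path <model-path>
```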
`docs/model_support.md`
```
    "api_type": "openai",
    "api_base": "https://api.openai.com/v1",
    "api_key": "sk-******",
    "anony_only": false,
    "recommended_config": {
      "temperature": 0.7,
      "top_p": 1.0
    },
    "text-arena": true,
    "vision-arena": false
  }
}
```
- "api_type" can be one of the following: openai, anthropic, gemini, mistral, yandexgpt, or reka. For custom APIs, add a new type and implement it accordingly.
- "anony_only" indicates whether to display this model in anonymous mode only.
- "recommended_config" indicates the recommended generation parameters (temperature and top_p).
- "text-arena" indicates whether the model should be displayed in the Text Arena.
- "vision-arena" indicates whether the model should be displayed in the Vision Arena.
2. Launch the Gradio web server with the argument `--register api_endpoints.json`:
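For example (the multi-tab server module name is an assumption; the `--register` flag is the one named above):

```shell
python3 -m fastchat.serve.gradio_web_server_multi --register api_endpoints.json
```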
```
<span style='color: #DE3163; font-weight: bold;'>We would love your feedback! Fill out <a href='https://docs.google.com/forms/d/e/1FAIpQLSfKSxwFOW6qD05phh4fwYjk8q0YV1VQe_bmK0_qOVTbC66_MA/viewform?usp=sf_link' style='color: #DE3163; text-decoration: underline;'>this short survey</a> to tell us what you like about the arena, what you don't like, and what you want to see in the future.</span>
</div>
</div>"""

##### For the gradio web server
SERVER_ERROR_MSG = (
    "**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**"
)
TEXT_MODERATION_MSG = (
    "$MODERATION$ YOUR TEXT VIOLATES OUR CONTENT MODERATION GUIDELINES."
)
IMAGE_MODERATION_MSG = (
    "$MODERATION$ YOUR IMAGE VIOLATES OUR CONTENT MODERATION GUIDELINES."
)
MODERATION_MSG = "$MODERATION$ YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES."
CONVERSATION_LIMIT_MSG = "YOU HAVE REACHED THE CONVERSATION LENGTH LIMIT. PLEASE CLEAR HISTORY AND START A NEW CONVERSATION."
INACTIVE_MSG = "THIS SESSION HAS BEEN INACTIVE FOR TOO LONG. PLEASE REFRESH THIS PAGE."
SLOW_MODEL_MSG = "⚠️ Both models will show the responses all at once. Please stay patient as it may take over 30 seconds."
RATE_LIMIT_MSG = "**RATE LIMIT OF THIS MODEL IS REACHED. PLEASE COME BACK LATER OR USE <span style='color: red; font-weight: bold;'>[BATTLE MODE](https://lmarena.ai)</span> (the 1st tab).**"
```
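The `$MODERATION$` token at the start of these messages looks like a placeholder for the server to fill in at display time; a minimal sketch of such a substitution (the helper below is hypothetical, not part of FastChat's API):

```python
MODERATION_MSG = "$MODERATION$ YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES."

def render_moderation_msg(template: str, categories: list[str]) -> str:
    """Hypothetical helper: replace the $MODERATION$ marker with flagged categories."""
    label = "MODERATION (" + ", ".join(categories) + "):" if categories else "MODERATION:"
    return template.replace("$MODERATION$", label)

print(render_moderation_msg(MODERATION_MSG, ["hate"]))
# → MODERATION (hate): YOUR INPUT VIOLATES OUR CONTENT MODERATION GUIDELINES.
```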