FastChat is an open platform for training, serving, and evaluating large language model-based chatbots.
- FastChat powers Chatbot Arena (https://chat.lmsys.org/), serving over 10 million chat requests for 70+ LLMs.
- Chatbot Arena has collected over 500K human votes from side-by-side LLM battles to compile an online [LLM Elo leaderboard](https://leaderboard.lmsys.org).
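The leaderboard is compiled from pairwise "battle" outcomes. As a rough illustration of how such votes translate into ratings, the sketch below applies a standard Elo update; it is illustrative only, and the live leaderboard's exact rating computation may differ.

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Standard Elo update: the winner gains points in proportion to how
    surprising the win was given the current ratings."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Two evenly matched models: the winner gains exactly k/2 points.
a, b = elo_update(1000.0, 1000.0)
print(a, b)  # 1016.0 984.0
```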
FastChat's core features include:
- The training and evaluation code for state-of-the-art models (e.g., Vicuna, MT-Bench).
- A distributed multi-model serving system with web UI and OpenAI-compatible RESTful APIs.
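Because the serving system exposes OpenAI-compatible endpoints, existing OpenAI client code can point at a FastChat deployment. The sketch below builds (but does not send) such a request; the port, model name, and placeholder key are assumptions about a local deployment, not fixed values.

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible /v1/chat/completions route."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer EMPTY",  # placeholder; a local server may not check keys
        },
        method="POST",
    )

req = chat_request("http://localhost:8000", "vicuna-7b-v1.5", "Hello!")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Sending `req` with `urllib.request.urlopen` (or using the `openai` client with its base URL pointed at the server) would return a standard chat-completion response.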
## News
- [2024/03] 🔥 We released the Chatbot Arena technical [report](https://arxiv.org/abs/2403.04132).
- [2023/09] We released **LMSYS-Chat-1M**, a large-scale real-world LLM conversation dataset. Read the [report](https://arxiv.org/abs/2309.11998).
- [2023/08] We released **Vicuna v1.5** based on Llama 2 with 4K and 16K context lengths. Download [weights](#vicuna-weights).
- [2023/07] We released **Chatbot Arena Conversations**, a dataset containing 33k conversations with human preferences. Download it [here](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations).
After these steps, the new model should be compatible with most FastChat features.
## API-Based Models
To support an API-based model, consider learning from the existing OpenAI example.
If the model is compatible with OpenAI APIs, then a configuration file is all that's needed without any additional code.
For custom protocols, you must implement a streaming generator in [fastchat/serve/api_provider.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/api_provider.py), following the provided examples. Currently, FastChat is compatible with OpenAI, Anthropic, Google Vertex AI, Mistral, Nvidia NGC, YandexGPT, and Reka.
### Steps to Launch a WebUI with an API Model
1. Specify the endpoint information in a JSON configuration file. For instance, create a file named `api_endpoints.json`:
```
{
  "gpt-3.5-turbo": {
    "model_name": "gpt-3.5-turbo",
    "api_type": "openai",
    "api_base": "https://api.openai.com/v1",
    "api_key": "sk-******",
    "anony_only": false
  }
}
```
- "api_type" can be one of the following: openai, anthropic, gemini, mistral, yandexgpt, or reka. For custom APIs, add a new type and implement it accordingly.
- "anony_only" indicates whether to display this model in anonymous mode only.
2. Launch the Gradio web server with the argument `--register api_endpoints.json`:
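A sketch of the launch command, assuming FastChat is installed and the configuration file sits in the working directory (the `--controller ""` and `--share` flags are assumed defaults for an API-only deployment; adjust them for your setup):

```shell
# Start the Gradio web UI and register the API-based endpoints.
# An empty --controller value means no local model-worker controller is used,
# so only the models listed in api_endpoints.json are served.
python3 -m fastchat.serve.gradio_web_server --controller "" --share --register api_endpoints.json
```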