This is the user interface that users will interact with.

By following these steps, you will be able to serve your models using the web UI. You can open your browser and chat with a model now.
If the models do not show up, try restarting the Gradio web server.

## Launch Chatbot Arena (side-by-side battle UI)

Currently, Chatbot Arena is powered by FastChat. Here is how you can launch an instance of Chatbot Arena locally.

FastChat supports popular API-based models such as OpenAI, Anthropic, Gemini, Mistral, and more. To add a custom API, please refer to the model support [doc](./docs/model_support.md). Below, we take OpenAI models as an example.

Create a JSON configuration file `api_endpoint.json` with the API endpoints of the models you want to serve, for example:
```
{
    "gpt-4o-2024-05-13": {
        "model_name": "gpt-4o-2024-05-13",
        "api_base": "https://api.openai.com/v1",
        "api_type": "openai",
        "api_key": [Insert API Key],
        "anony_only": false
    }
}
```
For Anthropic models, specify `"api_type": "anthropic_message"` with your Anthropic key. Similarly, for Gemini models, specify `"api_type": "gemini"`. More details can be found in [api_provider.py](https://github.com/lm-sys/FastChat/blob/main/fastchat/serve/api_provider.py).
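
As a rough sketch (the model names and key placeholders below are illustrative; the exact set of fields each provider accepts is defined in `api_provider.py`), Anthropic and Gemini entries might look like:
```
{
    "claude-3-5-sonnet-20240620": {
        "model_name": "claude-3-5-sonnet-20240620",
        "api_type": "anthropic_message",
        "api_key": [Insert Anthropic API Key],
        "anony_only": false
    },
    "gemini-1.5-pro": {
        "model_name": "gemini-1.5-pro",
        "api_type": "gemini",
        "api_key": [Insert Gemini API Key],
        "anony_only": false
    }
}
```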
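
With `api_endpoint.json` in place, you can then launch the arena UI. The following is a minimal sketch; it assumes the multi-tab Gradio server module and its endpoint-registration flag, so check `python3 -m fastchat.serve.gradio_web_server_multi --help` for the exact argument names:
```
# Assumed flag name -- verify against --help before relying on it.
python3 -m fastchat.serve.gradio_web_server_multi --register-api-endpoint-file api_endpoint.json
```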
To serve your own model using local GPUs, follow the instructions in [Serving with Web GUI](#serving-with-web-gui).

#### (Optional): Advanced Features, Scalability, Third Party UI
- You can register multiple model workers to a single controller, which can be used for serving a single model with higher throughput or serving multiple models at the same time. When doing so, please allocate different GPUs and ports for different model workers, as in the sketch below.
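
A minimal sketch of two workers behind one controller (model paths and port numbers are illustrative; it assumes the controller from [Serving with Web GUI](#serving-with-web-gui) is already running on its default port):
```
# Worker 0, pinned to GPU 0
CUDA_VISIBLE_DEVICES=0 python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.5 --controller http://localhost:21001 --port 31000 --worker http://localhost:31000
# Worker 1, pinned to GPU 1 and given a different port
CUDA_VISIBLE_DEVICES=1 python3 -m fastchat.serve.model_worker --model-path lmsys/fastchat-t5-3b-v1.0 --controller http://localhost:21001 --port 31001 --worker http://localhost:31001
```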