-[2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post]().
+[2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post](https://vllm.ai).
---
@@ -62,15 +62,15 @@ Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to get started
## Performance
vLLM outperforms HuggingFace Transformers (HF) by up to 24x and Text Generation Inference (TGI) by up to 3.5x, in terms of throughput.
-For details, check out our [blog post]().
+For details, check out our [blog post](https://vllm.ai).
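As a rough illustration of what the throughput comparison measures (this sketch is not part of the diff; the model name, prompt set, and batch size are placeholders), one might time batched generation with vLLM's offline API and divide the number of generated tokens by the elapsed time:

```python
import time

from vllm import LLM, SamplingParams

# Placeholder workload; real benchmarks use larger models and recorded request traces.
prompts = ["Explain paged attention in one sentence."] * 256
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

llm = LLM(model="facebook/opt-125m")  # example model; swap in the model under test

start = time.perf_counter()
outputs = llm.generate(prompts, sampling_params)
elapsed = time.perf_counter() - start

# Count only generated (output) tokens, which is what throughput claims usually refer to.
generated_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"Throughput: {generated_tokens / elapsed:.1f} output tokens/s")
```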