
Commit 794e578

[Minor] Fix URLs (#166)
1 parent caddfc1 commit 794e578

3 files changed, +8 -8 lines changed

README.md

Lines changed: 6 additions & 6 deletions
@@ -10,15 +10,15 @@ Easy, fast, and cheap LLM serving for everyone
 </h3>

 <p align="center">
-| <a href="https://vllm.readthedocs.io/en/latest/"><b>Documentation</b></a> | <a href=""><b>Blog</b></a> |
+| <a href="https://vllm.readthedocs.io/en/latest/"><b>Documentation</b></a> | <a href="https://vllm.ai"><b>Blog</b></a> | <a href="https://github.com/vllm-project/vllm/discussions"><b>Discussions</b></a> |

 </p>

 ---

 *Latest News* 🔥

-- [2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post]().
+- [2023/06] We officially released vLLM! vLLM has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid April. Check out our [blog post](https://vllm.ai).

 ---

@@ -62,15 +62,15 @@ Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to get started
 ## Performance

 vLLM outperforms HuggingFace Transformers (HF) by up to 24x and Text Generation Inference (TGI) by up to 3.5x, in terms of throughput.
-For details, check out our [blog post]().
+For details, check out our [blog post](https://vllm.ai).

 <p align="center">
 <picture>
 <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a10g_n1_dark.png">
 <img src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a10g_n1_light.png" width="45%">
 </picture>
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a100_n1_dark.png">
+<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n1_dark.png">
 <img src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n1_light.png" width="45%">
 </picture>
 <br>
@@ -79,11 +79,11 @@ For details, check out our [blog post]().

 <p align="center">
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a10g_n3_dark.png">
+<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a10g_n3_dark.png">
 <img src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a10g_n3_light.png" width="45%">
 </picture>
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="./docs/source/assets/figures/perf_a100_n3_dark.png">
+<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n3_dark.png">
 <img src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n3_light.png" width="45%">
 </picture> <br>
 <em> Serving throughput when each request asks for 3 output completions. </em>
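The three srcset fixes above follow one pattern: a relative path like ./docs/source/assets/... only resolves when the README is rendered from the repository root on github.com, while an absolute raw.githubusercontent.com URL renders anywhere the README is re-hosted, such as PyPI or mirrors. A minimal sketch of the light/dark image pattern the fixed lines converge on (the two URLs come from the hunk above; the comments are added for illustration):

<!-- Sketch: dark-mode-aware image embed using absolute URLs -->
<picture>
  <!-- Chosen when the viewer prefers a dark color scheme -->
  <source media="(prefers-color-scheme: dark)"
          srcset="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n1_dark.png">
  <!-- Light-mode default, and the fallback wherever <source> is unsupported -->
  <img src="https://raw.githubusercontent.com/vllm-project/vllm/main/docs/source/assets/figures/perf_a100_n1_light.png" width="45%">
</picture>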

docs/source/index.rst

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ vLLM is flexible and easy to use with:
 * Streaming outputs
 * OpenAI-compatible API server

-For more information, please refer to our `blog post <>`_.
+For more information, please refer to our `blog post <https://vllm.ai>`_.


 Documentation
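This fix works because reST's named external hyperlink requires a non-empty target between the angle brackets; `blog post <>`_ parses but links nowhere. A minimal sketch of the syntax, for illustration only:

.. A named external hyperlink has the form `link text <target URL>`_ ;
   an empty target, as in the removed line, yields a dead link.

For more information, please refer to our `blog post <https://vllm.ai>`_.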

docs/source/models/adding_model.rst

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ This document provides a high-level guide on integrating a `HuggingFace Transfor
 0. Fork the vLLM repository
 --------------------------------

-Start by forking our `GitHub <https://github.com/vllm-project/vllm/issues>`_ repository and then :ref:`build it from source <build_from_source>`.
+Start by forking our `GitHub <https://github.com/vllm-project/vllm/>`_ repository and then :ref:`build it from source <build_from_source>`.
 This gives you the ability to modify the codebase and test your model.

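The corrected line points the link at the repository root rather than its issue tracker. The trailing :ref: role is Sphinx's label-based cross-reference; a minimal reST sketch of how it fits together (the build_from_source label is defined elsewhere in the docs; the section body here is illustrative):

.. A label is declared immediately before a section heading:

.. _build_from_source:

Build from source
-----------------

.. and :ref:`custom text <label>` links to it from any page:

See :ref:`build it from source <build_from_source>`.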
