
Commit c56753c

Merge pull request #18 from hmellor/fix-dead-links
Fix dead links to installation docs
2 parents: 0366229 + 59933df

2 files changed: +4 −4 lines changed

_posts/2023-06-20-vllm.md

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ This utilization of vLLM has also significantly reduced operational costs. With
 
 ### Get started with vLLM
 
-Install vLLM with the following command (check out our [installation guide](https://vllm.readthedocs.io/en/latest/getting_started/installation.html) for more):
+Install vLLM with the following command (check out our [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) for more):
 
 ```bash
 $ pip install vllm
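
The linked guide's quick start is the single pip command shown in the hunk; as a hedged sketch of the install plus a verification step (the import check is an illustrative addition, not part of the post):

```bash
# Install the released vLLM wheel from PyPI, as the post shows.
pip install vllm

# Illustrative smoke test: confirm the package imports and print its version.
python -c "import vllm; print(vllm.__version__)"
```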

_posts/2025-01-10-dev-experience.md

Lines changed: 3 additions & 3 deletions
@@ -29,7 +29,7 @@ For those who prefer a faster package manager, [**uv**](https://github.com/astra
 uv pip install vllm
 ```
 
-Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#install-released-versions) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:
+Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#create-a-new-python-environment) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:
 
 ```sh
 # with cached packages, clean virtual environment
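
The new anchor points at the docs section on creating a Python environment with uv. A minimal sketch of that workflow, assuming uv is already installed (the environment name and Python version here are illustrative choices):

```sh
# Create a fresh virtual environment with uv, activate it, and install vLLM.
uv venv vllm-env --python 3.12
source vllm-env/bin/activate
uv pip install vllm
```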
@@ -77,11 +77,11 @@ VLLM_USE_PRECOMPILED=1 pip install -e .
 
 The `VLLM_USE_PRECOMPILED=1` flag instructs the installer to use pre-compiled CUDA kernels instead of building them from source, significantly reducing installation time. This is perfect for developers focusing on Python-level features like API improvements, model support, or integration work.
 
-This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#python-only-build-without-compilation) for more advanced usage.
+This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more advanced usage.
 
 ### C++/Kernel Developers
 
-For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#full-build-with-compilation) for more details.
+For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more details.
 
 ## Track Changes with Ease
 
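Both rewritten links in this hunk now resolve to the same #build-wheel-from-source section, which documents the editable install quoted in the hunk header. A sketch of that flow, assuming the standard vllm-project repository (steps illustrative, not prescriptive):

```sh
# Clone the source and perform a Python-only editable install.
# VLLM_USE_PRECOMPILED=1 reuses pre-compiled CUDA kernels instead of
# building them, so only the Python sources are installed in editable mode.
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install -e .
```

For kernel work, the compilation cache the post refers to is typically ccache or sccache (an assumption here; the linked docs describe the supported setup).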
