Commit 70f6a15

polish
Signed-off-by: youkaichao <[email protected]>
1 parent 9990f00 commit 70f6a15

_posts/2025-01-10-dev-experience.md

Lines changed: 7 additions & 7 deletions
@@ -11,7 +11,7 @@ The field of LLM inference is advancing at an unprecedented pace. With new model

* Flexible and fast installation options from stable releases to nightly builds.
* Streamlined development workflow for both Python and C++/CUDA developers.
-* Robust version tracking capabilities for production environments.
+* Robust version tracking capabilities for production deployments.

## Seamless Installation of vLLM Versions

@@ -29,7 +29,7 @@ For those who prefer a faster package manager, [**uv**](https://github.com/astra
uv pip install vllm
```

-Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#install-released-versions) for more details on setting up [**uv**](https://github.com/astral-sh/uv). With a simple server-grade setup (Intel 8th Gen CPU), we can see [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:
+Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#install-released-versions) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:

```sh
# with cached packages, clean virtual environment
@@ -67,7 +67,7 @@ We understand that an active, engaged developer community is the backbone of inn

### Python Developers

-For Python developers who need to tweak and test vLLM’s Python code, there’s no need to compile kernels. Our solution allows you to begin development quickly:
+For Python developers who need to tweak and test vLLM’s Python code, there’s no need to compile kernels. This setup enables you to start development quickly.

```sh
git clone https://github.com/vllm-project/vllm.git
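
The hunk above shows only the first line of the setup block; the editable install command itself appears in the next hunk's header. Assembled from those fragments, the Python-only development flow being described looks roughly like this (the `cd vllm` step is implied rather than visible in the diff):

```sh
# Python-only development setup, pieced together from the fragments visible in this diff.
git clone https://github.com/vllm-project/vllm.git
cd vllm                                  # implied step, not shown in the hunks
VLLM_USE_PRECOMPILED=1 pip install -e .  # editable install that reuses pre-compiled CUDA kernels
```
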
@@ -77,15 +77,15 @@ VLLM_USE_PRECOMPILED=1 pip install -e .

The `VLLM_USE_PRECOMPILED=1` flag instructs the installer to use pre-compiled CUDA kernels instead of building them from source, significantly reducing installation time. This is perfect for developers focusing on Python-level features like API improvements, model support, or integration work.

-This lightweight process runs efficiently, even on a laptop. For more advanced usage, please check the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#python-only-build-without-compilation).
+This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#python-only-build-without-compilation) for more advanced usage.

### C++/Kernel Developers

For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu-cuda.html#full-build-with-compilation) for more details.

## Track Changes with Ease

-The fast-evolving nature of LLM inference means interfaces and behaviors are still stabilizing. vLLM has been integrated into many workflows, including [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [veRL](https://github.com/volcengine/verl), [open_instruct](https://github.com/allenai/open-instruct), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), etc. We are working with them to stabilize interfaces and behaviors for LLM inference. To facilitate the process, we provide powerful tools for these powerful users to track changes across versions.
+The fast-evolving nature of LLM inference means interfaces and behaviors are still stabilizing. vLLM has been integrated into many workflows, including [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [veRL](https://github.com/volcengine/verl), [open_instruct](https://github.com/allenai/open-instruct), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), etc. We collaborate with these projects to stabilize interfaces and behaviors for LLM inference. To facilitate the process, we provide powerful tools for these advanced users to track changes across versions.

### Installing a Specific Commit

@@ -113,8 +113,8 @@ pip install https://wheels.vllm.ai/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manyl

At vLLM, our commitment extends beyond delivering high-performance software. We’re building a system that empowers trust, enables transparent tracking of changes, and invites active participation. Together, we can shape the future of AI, pushing the boundaries of innovation while making it accessible to all.

-For collaboration requests or inquiries, reach out at [[email protected]](mailto:[email protected]). Join our growing community on [GitHub](https://github.com/vllm-project/vllm) or connect with us on the [vLLM Slack](https://slack.vllm.ai/). Let’s drive AI innovation forward, together.
+For collaboration requests or inquiries, reach out at [[email protected]](mailto:[email protected]). Join our growing community on [GitHub](https://github.com/vllm-project/vllm) or connect with us on the [vLLM Slack](https://slack.vllm.ai/). Together, let’s drive AI innovation forward.

## Acknowledgments

-We would like to express our gratitude to the [uv community](https://docs.astral.sh/uv/) -- particularly [Charlie Marsh](https://github.com/charliermarsh) -- for creating a fast, innovative package manager. Special thanks to [Kevin Luu](https://github.com/khluu) (Anyscale), [Daniele Trifirò](https://github.com/dtrifiro) (Red Hat), and [Michael Goin](https://github.com/mgoin) (Neural Magic) for their invaluable contributions to streamlining workflows. [Kaichao You](https://github.com/youkaichao) and [Simon Mo](https://github.com/simon-mo) from the UC Berkeley team lead these efforts.
+We extend our gratitude to the [uv community](https://docs.astral.sh/uv/) particularly [Charlie Marsh](https://github.com/charliermarsh) for creating a fast, innovative package manager. Special thanks to [Kevin Luu](https://github.com/khluu) (Anyscale), [Daniele Trifirò](https://github.com/dtrifiro) (Red Hat), and [Michael Goin](https://github.com/mgoin) (Neural Magic) for their invaluable contributions to streamlining workflows. [Kaichao You](https://github.com/youkaichao) and [Simon Mo](https://github.com/simon-mo) from the UC Berkeley team lead these efforts.
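
The header of the final hunk shows the commit-pinned wheel URL introduced under "Installing a Specific Commit". A minimal sketch of that flow, using a placeholder for the commit hash and assuming the truncated filename ends in `manylinux1_x86_64.whl`:

```sh
# Sketch only: install a vLLM build for a specific commit from the nightly wheel index.
# Replace the placeholder with a full commit hash from the main branch; the wheel filename
# completion is assumed, since the URL in the hunk header above is truncated.
export VLLM_COMMIT=<full commit hash>
pip install https://wheels.vllm.ai/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
```
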
