
Commit feacc14

Merge remote-tracking branch 'origin/main' into ptpc-blog

2 parents: 804b617 + ded60c9

File tree

7 files changed (+36, -8 lines)

Gemfile.lock

Lines changed: 2 additions & 2 deletions

@@ -217,7 +217,7 @@ GEM
       gemoji (>= 3, < 5)
       html-pipeline (~> 2.2)
       jekyll (>= 3.0, < 5.0)
-    json (2.10.1)
+    json (2.10.2)
     kramdown (2.4.0)
       rexml
     kramdown-parser-gfm (1.1.0)
@@ -270,7 +270,7 @@ GEM
     tzinfo (2.0.6)
       concurrent-ruby (~> 1.0)
     unicode-display_width (1.8.0)
-    uri (1.0.2)
+    uri (1.0.3)
     webrick (1.9.1)

 PLATFORMS

README.md

Lines changed: 16 additions & 0 deletions

@@ -15,6 +15,22 @@ To add a new blogpost, please refer to `_posts/2023-06-20-vllm.md` as an example

 The blog is automatically built and deployed by GitHub Actions when `main` is pushed to.

+## LaTeX Math
+
+The blog supports LaTeX math via [MathJax](https://docs.mathjax.org/en/latest/index.html).
+
+It can be enabled by adding `math: true` to the document frontmatter. It has been configured to support the standard LaTeX style math notation, i.e.:
+
+```latex
+$ inline math $
+```
+
+```latex
+$$
+math block
+$$
+```
+
 ## Theme customization

 The theme we are using is [Minima](https://github.com/jekyll/minima). If you need to customise anything from this theme, see [Overriding theme defaults](https://jekyllrb.com/docs/themes/#overriding-theme-defaults).
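To make the new option concrete, here is a minimal, hypothetical post that would exercise it. The file name, title, layout value, and formulas are illustrative; only `math: true` and the `$` / `$$` delimiters come from the README section above:

```markdown
---
layout: post
title: "Example post with math"
math: true
---

Inline math such as $E = mc^2$ is wrapped in single dollar signs, while display math

$$
\int_0^1 x^2 \, dx = \frac{1}{3}
$$

goes inside a double-dollar block.
```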

_includes/custom-head.html

Lines changed: 12 additions & 0 deletions

@@ -0,0 +1,12 @@
+{% if page.math %}
+<script>
+  MathJax = {
+    tex: {
+      inlineMath: [['$', '$'], ['\\(', '\\)']],
+      displayMath: [['$$', '$$'], ['\\[', '\\]']]
+    }
+  };
+</script>
+<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
+</script>
+{% endif %}
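For context on why this new file takes effect: Minima treats `_includes/custom-head.html` as a theme override point, so the snippet above ends up in every page's `<head>` via an include in the theme's own head partial, roughly like the approximate line below. See the "Overriding theme defaults" link in the README if the installed Minima version differs.

```html
<!-- inside Minima's _includes/head.html (approximate) -->
{%- include custom-head.html -%}
```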

_posts/2023-06-20-vllm.md

Lines changed: 1 addition & 1 deletion

@@ -108,7 +108,7 @@ This utilization of vLLM has also significantly reduced operational costs. With

 ### Get started with vLLM

-Install vLLM with the following command (check out our [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) for more):
+Install vLLM with the following command (check out our [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation.html) for more):

 ```bash
 $ pip install vllm

_posts/2024-09-05-perf-update.md

Lines changed: 1 addition & 1 deletion

@@ -150,7 +150,7 @@ Importantly, we will also focus on improving the core of vLLM to reduce the comp

 ### Get Involved

-If you haven’t, we highly recommend you to update the vLLM version (see instructions [here](https://docs.vllm.ai/en/latest/getting_started/installation/index.html)) and try it out for yourself\! We always love to learn more about your use cases and how we can make vLLM better for you. The vLLM team can be reached out via [[email protected]](mailto:[email protected]). vLLM is also a community project, if you are interested in participating and contributing, we welcome you to check out our [roadmap](https://roadmap.vllm.ai/) and see [good first issues](https://github.com/vllm-project/vllm/issues?q=is:open+is:issue+label:%22good+first+issue%22) to tackle. Stay tuned for more updates by [following us on X](https://x.com/vllm\_project).
+If you haven’t, we highly recommend you to update the vLLM version (see instructions [here](https://docs.vllm.ai/en/latest/getting_started/installation.html)) and try it out for yourself\! We always love to learn more about your use cases and how we can make vLLM better for you. The vLLM team can be reached out via [[email protected]](mailto:[email protected]). vLLM is also a community project, if you are interested in participating and contributing, we welcome you to check out our [roadmap](https://roadmap.vllm.ai/) and see [good first issues](https://github.com/vllm-project/vllm/issues?q=is:open+is:issue+label:%22good+first+issue%22) to tackle. Stay tuned for more updates by [following us on X](https://x.com/vllm\_project).

 If you are in the Bay Area, you can meet the vLLM team at the following events: [vLLM’s sixth meetup with NVIDIA(09/09)](https://lu.ma/87q3nvnh), [PyTorch Conference (09/19)](https://pytorch2024.sched.com/event/1fHmx/vllm-easy-fast-and-cheap-llm-serving-for-everyone-woosuk-kwon-uc-berkeley-xiaoxuan-liu-ucb), [CUDA MODE IRL meetup (09/21)](https://events.accel.com/cudamode), and [the first ever vLLM track at Ray Summit (10/01-02)](https://raysummit.anyscale.com/flow/anyscale/raysummit2024/landing/page/sessioncatalog?search.sessiontracks=1719251906298001uzJ2).

_posts/2025-01-10-dev-experience.md

Lines changed: 3 additions & 3 deletions

@@ -29,7 +29,7 @@ For those who prefer a faster package manager, [**uv**](https://github.com/astra
 uv pip install vllm
 ```

-Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#create-a-new-python-environment) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:
+Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#create-a-new-python-environment) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:

 ```sh
 # with cached packages, clean virtual environment
@@ -77,11 +77,11 @@ VLLM_USE_PRECOMPILED=1 pip install -e .

 The `VLLM_USE_PRECOMPILED=1` flag instructs the installer to use pre-compiled CUDA kernels instead of building them from source, significantly reducing installation time. This is perfect for developers focusing on Python-level features like API improvements, model support, or integration work.

-This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more advanced usage.
+This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#build-wheel-from-source) for more advanced usage.

 ### C++/Kernel Developers

-For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more details.
+For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#build-wheel-from-source) for more details.

 ## Track Changes with Ease
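Pulling the commands referenced in this post's hunks together, a Python-level development setup could look like the sketch below. The clone URL, directory, and Python version are illustrative; `uv pip install vllm` and `VLLM_USE_PRECOMPILED=1 pip install -e .` are the commands quoted above.

```sh
# fresh virtual environment via uv (a fast drop-in replacement for pip/venv)
uv venv --python 3.12
source .venv/bin/activate

# pick one of the two installs below.

# (a) just using vLLM: install the released wheel
uv pip install vllm

# (b) developing Python-level features: editable install that reuses
#     pre-compiled CUDA kernels instead of building them locally
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install -e .
```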

_posts/2025-01-27-intro-to-llama-stack-with-vllm.md

Lines changed: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ huggingface-cli login --token <YOUR-HF-TOKEN>
 huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --local-dir /tmp/test-vllm-llama-stack/.cache/huggingface/hub/models/Llama-3.2-1B-Instruct
 ```

-Next, let's build the vLLM CPU container image from source. Note that while we use it for demonstration purposes, there are plenty of [other images available for different hardware and architectures](https://docs.vllm.ai/en/latest/getting_started/installation/index.html).
+Next, let's build the vLLM CPU container image from source. Note that while we use it for demonstration purposes, there are plenty of [other images available for different hardware and architectures](https://docs.vllm.ai/en/latest/getting_started/installation.html).

 ```
 git clone git@github.com:vllm-project/vllm.git /tmp/test-vllm-llama-stack
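The hunk stops at the clone step. Purely as a hedged sketch, the follow-on image build might look like the commands below; the Dockerfile name (`Dockerfile.cpu`) and image tag are assumptions, since the CPU Dockerfile location has moved between vLLM releases, so defer to the linked installation docs for the authoritative command.

```sh
cd /tmp/test-vllm-llama-stack
# Dockerfile path and tag are illustrative; check the vLLM docs for the current ones
docker build -f Dockerfile.cpu -t vllm-cpu-env .
```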
