
Commit 767e63b

[Docs] Improve docs navigation (#22720)
Signed-off-by: Harry Mellor <[email protected]>
1 parent: 007dd90

7 files changed: +40 −19 lines


.gitignore

Lines changed: 2 additions & 1 deletion
@@ -150,7 +150,8 @@ venv.bak/
 # mkdocs documentation
 /site
 docs/argparse
-docs/examples
+docs/examples/*
+!docs/examples/README.md
 
 # mypy
 .mypy_cache/

docs/.nav.yml

Lines changed: 7 additions & 15 deletions
@@ -1,25 +1,17 @@
 nav:
-  - Home:
-    - vLLM: README.md
+  - Home: README.md
+  - User Guide:
+    - usage/README.md
     - Getting Started:
       - getting_started/quickstart.md
       - getting_started/installation
       - Examples:
+        - examples/README.md
        - Offline Inference: examples/offline_inference
         - Online Serving: examples/online_serving
         - Others: examples/others
-    - Quick Links:
-      - User Guide: usage/README.md
-      - Developer Guide: contributing/README.md
-      - API Reference: api/README.md
-      - CLI Reference: cli/README.md
-    - Timeline:
-      - Roadmap: https://roadmap.vllm.ai
-      - Releases: https://github.com/vllm-project/vllm/releases
-  - User Guide:
-    - Summary: usage/README.md
-    - usage/v1_guide.md
     - General:
+      - usage/v1_guide.md
       - usage/*
   - Inference and Serving:
     - serving/offline_inference.md

@@ -32,7 +24,7 @@ nav:
     - deployment/integrations
   - Training: training
   - Configuration:
-    - Summary: configuration/README.md
+    - configuration/README.md
     - configuration/*
   - Models:
     - models/supported_models.md

@@ -45,7 +37,7 @@ nav:
     - features/*
     - features/quantization
   - Developer Guide:
-    - Summary: contributing/README.md
+    - contributing/README.md
     - General:
       - glob: contributing/*
         flatten_single_child_sections: true
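The recurring change in this file is that each section's explicit "Summary:" entry is replaced by the section's README.md listed as a bare path. A minimal sketch of the pattern, assuming the `.nav.yml` format used here together with the Material theme's `navigation.indexes` feature (already enabled in mkdocs.yaml, see below), under which a README.md listed as a section's first child is attached to the section as its landing page; the `Configuration` name and paths mirror the hunk above:

```yaml
# Sketch only: how a section is expected to be declared after this commit.
# `navigation.indexes` (see mkdocs.yaml) attaches the README to the section
# itself, so no explicit "Summary:" child page is needed.
nav:
  - Configuration:
    - configuration/README.md  # becomes the "Configuration" landing page
    - configuration/*          # remaining pages in the directory, via glob
```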

docs/README.md

Lines changed: 11 additions & 0 deletions
@@ -21,6 +21,17 @@ vLLM is a fast and easy-to-use library for LLM inference and serving.
 
 Originally developed in the [Sky Computing Lab](https://sky.cs.berkeley.edu) at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
 
+Where to get started with vLLM depends on the type of user. If you are looking to:
+
+- Run open-source models on vLLM, we recommend starting with the [Quickstart Guide](./getting_started/quickstart.md)
+- Build applications with vLLM, we recommend starting with the [User Guide](./usage)
+- Build vLLM, we recommend starting with [Developer Guide](./contributing)
+
+For information about the development of vLLM, see:
+
+- [Roadmap](https://roadmap.vllm.ai)
+- [Releases](https://github.com/vllm-project/vllm/releases)
+
 vLLM is fast with:
 
 - State-of-the-art serving throughput

docs/examples/README.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+# Examples
+
+vLLM's examples are split into three categories:
+
+- If you are using vLLM from within Python code, see [Offline Inference](./offline_inference/)
+- If you are using vLLM from an HTTP application or client, see [Online Serving](./online_serving/)
+- For examples of using some of vLLM's advanced features (e.g. LMCache or Tensorizer) which are not specific to either of the above use cases, see [Others](./others/)

docs/mkdocs/stylesheets/extra.css

Lines changed: 7 additions & 0 deletions
@@ -23,6 +23,13 @@ a:not(:has(svg)):not(.md-icon):not(.autorefs-external) {
   }
 }
 
+a[href*="localhost"]::after,
+a[href*="127.0.0.1"]::after,
+a[href*="org.readthedocs.build"]::after,
+a[href*="docs.vllm.ai"]::after {
+  display: none !important;
+}
+
 /* Light mode: darker section titles */
 body[data-md-color-scheme="default"] .md-nav__item--section > label.md-nav__link .md-ellipsis {
   color: rgba(0, 0, 0, 0.7) !important;

docs/usage/README.md

Lines changed: 3 additions & 1 deletion
@@ -1,6 +1,8 @@
 # Using vLLM
 
-vLLM supports the following usage patterns:
+First, vLLM must be [installed](../getting_started/installation) for your chosen device in either a Python or Docker environment.
+
+Then, vLLM supports the following usage patterns:
 
 - [Inference and Serving](../serving/offline_inference.md): Run a single instance of a model.
 - [Deployment](../deployment/docker.md): Scale up model instances for production.

mkdocs.yaml

Lines changed: 3 additions & 2 deletions
@@ -34,13 +34,14 @@ theme:
     - content.action.edit
     - content.code.copy
     - content.tabs.link
+    - navigation.instant
+    - navigation.instant.progress
     - navigation.tracking
     - navigation.tabs
     - navigation.tabs.sticky
     - navigation.sections
-    - navigation.prune
-    - navigation.top
     - navigation.indexes
+    - navigation.top
     - search.highlight
     - search.share
     - toc.follow
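Taking the context and changed lines of this hunk together, the net theme feature list after the commit would read roughly as follows; the enclosing `theme:`/`features:` keys and the indentation are assumptions, the flags and their order come from the hunk, and the comments describe the standard Material for MkDocs behaviour of the new flags. `navigation.prune` is the one flag removed.

```yaml
# Reconstructed from the diff above; sketch, not copied from the repository.
theme:
  features:
    - content.action.edit
    - content.code.copy
    - content.tabs.link
    - navigation.instant           # added: pages load without a full reload
    - navigation.instant.progress  # added: progress bar for slow instant loads
    - navigation.tracking
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.sections
    - navigation.indexes
    - navigation.top               # kept, now listed after navigation.indexes
    - search.highlight
    - search.share
    - toc.follow
```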
