
Commit 6a30c64

remove dup paragraphs

Signed-off-by: Roger Wang <[email protected]>

1 parent 9233f5c commit 6a30c64

File tree

1 file changed (+1, -4 lines)


_posts/2025-04-05-llama4.md

Lines changed: 1 addition & 4 deletions
````diff
@@ -12,10 +12,7 @@ We're excited to announce that vLLM now supports the [Llama 4 herd of models](ht
 ```
 pip install -U vllm
 ```
-
-with the following sample commands, alternatively you can replace the CLI command with docker run with instruction [here](https://docs.vllm.ai/en/latest/deployment/docker.html) or use our Pythonic interface the [`LLM` class](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#offline-batched-inference) for local batch inference. We also recommend checking out [a demo](https://github.com/meta-llama/llama-cookbook/blob/main/getting-started/build_with_llama_4.ipynb) from the Meta team showcasing the 1M long context capability with vLLM.
-
-Below, you'll find sample commands to get started. Alternatively, you can replace the CLI command with docker run ([instructions here](https://docs.vllm.ai/en/latest/deployment/docker.html)) or use [our Pythonic interface](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#offline-batched-inference), the `LLM` class, for local batch inference. We also recommend checking out the [demo from the Meta team](https://github.com/meta-llama/llama-cookbook/blob/main/getting-started/build_with_llama_4.ipynb) showcasing the 1M long context capability with vLLM.
+Below, you'll find sample commands to get started. Alternatively, you can replace the CLI command with docker run ([instructions here](https://docs.vllm.ai/en/latest/deployment/docker.html)) or use our Pythonic interface, the [`LLM` class](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#offline-batched-inference), for local batch inference. We also recommend checking out the [demo from the Meta team](https://github.com/meta-llama/llama-cookbook/blob/main/getting-started/build_with_llama_4.ipynb) showcasing the 1M long context capability with vLLM.
 
 ## Usage Guide
 
````