
Commit 600221a (1 parent: 665c94b)

Update Blog “using-structured-outputs-in-vllm”


content/blog/using-structured-outputs-in-vllm.md

Lines changed: 5 additions & 2 deletions
@@ -11,8 +11,11 @@ tags:
 - opensource
 - LLM
 ---
+<style> li { font-size: 27px; line-height: 33px; max-width: none; } </style>
 Generating predictable and reliable outputs from large language models (LLMs) can be challenging, especially when those outputs need to integrate seamlessly with downstream systems. Structured outputs solve this problem by enforcing specific formats, such as JSON, regex patterns, or even grammars. vLLM, an open source inference and serving engine for LLMs, has supported structured outputs for some time, but there was no documentation on how to use them, which is why I decided to contribute and write the [Structured Outputs documentation page](https://docs.vllm.ai/en/latest/usage/structured_outputs.html).
 
+In this blog post, I'll explain how structured outputs work in vLLM and walk you through how to use them effectively.
+
 ## Why structured outputs?
 
 LLMs are incredibly powerful, but their outputs can be inconsistent when a specific format is required. Structured outputs address this issue by restricting the model’s generated text to adhere to predefined rules or formats, ensuring:
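For context on the formats the post refers to (choice, regex, JSON, grammar), here is a minimal sketch of a guided-choice request against a vLLM OpenAI-compatible server. The `guided_choice` field follows the structured-outputs options described in the vLLM documentation; the server address and model name are placeholders.

```python
# Minimal sketch (assumed setup): a vLLM server started locally with
# `vllm serve <model>` exposes an OpenAI-compatible API, and a structured
# output is requested through the guided-decoding options in `extra_body`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local vLLM endpoint
    api_key="-",                          # vLLM does not check the key by default
)

# Constrain the completion to one of a fixed set of choices.
completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-3B-Instruct",  # placeholder model name
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={"guided_choice": ["positive", "negative"]},
)
print(completion.choices[0].message.content)  # expected: "positive" or "negative"
```

The other formats use the same request shape; only the `extra_body` key (`guided_regex`, `guided_json`, `guided_grammar`) and its value change.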
@@ -170,8 +173,8 @@ SELECT * FROM users WHERE age > 30;
 
 To start integrating structured outputs into your projects:
 
-1. **Explore the documentation:** Check out the official documentation for more examples and detailed explanations.
-2. **Install vLLM locally:** Set up the inference server on your local machine using the vLLM GitHub repository.
+1. **Explore the documentation:** Check out the [official documentation](https://docs.vllm.ai/en/latest/) for more examples and detailed explanations.
+2. **Install vLLM locally:** Set up the inference server on your local machine using the [vLLM GitHub repository](https://github.com/vllm-project/vllm).
 3. **Experiment with structured outputs:** Try out different formats (choice, regex, JSON, grammar) and observe how they can simplify your workflow (see the sketch after this diff).
 4. **Deploy in production:** Once comfortable, deploy vLLM to your production environment and integrate it with your applications.
 