Generating predictable and reliable outputs from large language models (LLMs) can be challenging, especially when those outputs need to integrate seamlessly with downstream systems. Structured outputs solve this problem by enforcing specific formats, such as JSON, regex patterns, or even formal grammars. vLLM, an open source inference and serving engine for LLMs, has supported structured outputs for a while. However, there is little documentation on how to use it. This is why I decided to contribute and write the [Structured Outputs documentation page](https://docs.vllm.ai/en/latest/usage/structured_outputs.html).
In this blog post, I'll explain how structured outputs work in vLLM and walk you through how to use them effectively.
LLMs are incredibly powerful, but their outputs can be inconsistent when a specific format is required.
2. **Compatibility:** Seamless integration with APIs, databases, or other systems.
3. **Efficiency:** No need for extensive post-processing to validate or fix outputs.
Imagine there is an external system that receives a JSON object with all the details needed to trigger an alert, and you want your LLM-based system to be able to use it. Of course, you could explain to the LLM what the output format should be and that it must be valid JSON, but LLMs are not deterministic, so you may still end up with an invalid JSON object. If you have tried something like this before, you have probably found yourself in that situation.
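To make that failure mode concrete, here is a minimal sketch (the model reply string is invented for illustration) of what happens when you try to parse a typical unconstrained reply:

```python
import json

# An invented example of a typical unconstrained reply: the JSON is
# correct, but it is wrapped in conversational prose.
llm_reply = 'Sure! Here is your alert JSON: {"severity": "high", "message": "Disk full"}'

try:
    json.loads(llm_reply)
    parsed = True
except json.JSONDecodeError:
    parsed = False

print(parsed)  # False: the reply as a whole is not valid JSON
```

This is exactly the kind of problem the structured output tools below are designed to prevent.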
How do these tools work? The idea behind them is to filter the list of possible next tokens at every generation step, so that only tokens valid for the desired output format, for example a valid JSON object, can be generated.
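As a rough illustration of the idea (a toy vocabulary with made-up scores, not vLLM's actual implementation), masking invalid tokens before sampling looks like this:

```python
import math

# Toy vocabulary and made-up logits for a single generation step.
vocab = ["{", "}", '"', "hello", ":", "banana"]
logits = [1.2, 0.3, 2.5, 0.7, 1.9, 3.1]

def is_valid_next(token: str) -> bool:
    # A valid JSON object must start with "{", so at the first step
    # only that token is allowed.
    return token == "{"

# Set the score of every disallowed token to -inf so it can never win.
masked = [
    score if is_valid_next(tok) else -math.inf
    for tok, score in zip(vocab, logits)
]

# Greedy decoding over the masked scores: only "{" survives.
best = vocab[masked.index(max(masked))]
print(best)  # {
```

In a real engine this filtering runs over the full vocabulary at every step, driven by the chosen format (choices, regex, JSON schema, or grammar), but the principle is the same.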
Here’s how each works, along with example outputs:
### **1. Guided choice**
Guided choice is the simplest form of structured output. It ensures the response is one of a set of predefined options.
```python
from openai import OpenAI

# Point the client at a running vLLM OpenAI-compatible server.
# The base URL, API key, and model name below are placeholders:
# adjust them to match your own deployment.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="-",
)

completion = client.chat.completions.create(
    model="Qwen/Qwen2.5-3B-Instruct",
    messages=[
        {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
    ],
    extra_body={"guided_choice": ["positive", "negative"]},
)
print(completion.choices[0].message.content)
```

Example output:

```
positive
```
### **2. Guided Regex**
A guided regex constrains the output to match a regex pattern, which is useful for formats like email addresses.
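For instance, the email-style pattern below (an illustrative pattern, not one prescribed by vLLM) only accepts strings that match it end to end, which is exactly the guarantee a guided regex gives you about the model's output:

```python
import re

# Illustrative email-style pattern; with guided regex you would pass a
# pattern like this to the server instead of validating the output
# after the fact.
pattern = r"\w+@\w+\.com\n"

print(bool(re.fullmatch(pattern, "alan@enigma.com\n")))    # True
print(bool(re.fullmatch(pattern, "not an email at all")))  # False
```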