
Commit 1d3584a

burtenshaw and stevhliu authored
Apply suggestions from code review
Co-authored-by: Steven Liu <[email protected]>
1 parent f47aa15 commit 1d3584a


docs/inference-providers/guides/structured-output.md

Lines changed: 9 additions & 9 deletions
@@ -2,7 +2,7 @@
 
 In this guide, we'll show you how to use Inference Providers to generate structured outputs that follow a specific JSON schema. This is incredibly useful for building reliable AI applications that need predictable, parseable responses.
 
-Instead of hoping the model returns properly formatted JSON, structured outputs guarantee that the response will match your exact schema every time. This eliminates the need for complex parsing logic and makes your applications more robust.
+Structured outputs guarantee a model returns a response that matches your exact schema every time. This eliminates the need for complex parsing logic and makes your applications more robust.
 
 <Tip>
 
@@ -12,7 +12,7 @@ This guide assumes you have a Hugging Face account. If you don't have one, you c
 
 ## What Are Structured Outputs?
 
-Structured outputs make sure that AI model responses always follow a specific structure, typically a JSON Schema. This means you get predictable, type-safe data that integrates easily with your systems. Instead of hoping the model returns valid JSON, you give the AI a strict template to follow, so you always get data in the format you expect.
+Structured outputs make sure model responses always follow a specific structure, typically a JSON Schema. This means you get predictable, type-safe data that integrates easily with your systems. The model follows a strict template so you always get the data in the format you expect.
 
 Traditionally, getting structured data from LLMs required prompt engineering (asking the model to "respond in JSON format"), post-processing and parsing the response, and sometimes retrying when parsing failed. This approach is unreliable and can lead to brittle applications.
 
@@ -81,7 +81,7 @@ Install the Hugging Face Hub python package:
 pip install huggingface_hub
 ```
 
-Here's how to initialize the Hugging Face Hub client with the `InferenceClient` class.
+Initialize the `InferenceClient` with an Inference Provider (check this [list](https://huggingface.co/docs/inference-providers/index#partners) for all available providers) and your Hugging Face token.
 
 ```python
 import os
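A minimal sketch of the initialization the revised line describes. The provider name and the `HF_TOKEN` environment variable are assumptions for illustration, not prescribed by the guide.

```python
import os
from huggingface_hub import InferenceClient

# Pick any provider from the partner list linked above; "cerebras" is only an
# illustrative choice. The token is read from the HF_TOKEN environment variable
# (an assumption for this sketch).
client = InferenceClient(
    provider="cerebras",
    api_key=os.environ["HF_TOKEN"],
)
```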
@@ -105,7 +105,7 @@ Install the OpenAI python package:
 pip install openai
 ```
 
-Here's how to initialize the OpenAI client with the `OpenAI` class.
+Initialize an `OpenAI` client with the `base_url` and your Hugging Face token.
 
 ```python
 import os
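Similarly, a minimal sketch of the OpenAI-client setup the revised line describes. The router `base_url` shown here and the `HF_TOKEN` environment variable are assumptions and may differ from the snippet in the guide.

```python
import os
from openai import OpenAI

# Route OpenAI-style requests through Hugging Face Inference Providers by
# overriding base_url and authenticating with a Hugging Face token.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",  # assumed router endpoint
    api_key=os.environ["HF_TOKEN"],
)
```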
@@ -127,7 +127,7 @@ client = OpenAI(
 
 <Tip>
 
-Structured outputs are good use case for selecting a specific provider and model because you want to avoid incompatibility issues between the model, provider and the schema.
+Structured outputs are a good use case for selecting a specific provider and model because you want to avoid incompatibility issues between the model, provider and the schema.
 
 </Tip>
 
@@ -236,7 +236,7 @@ Both approaches guarantee that your response will match the specified schema. He
 
 <hfoptions id="structured-outputs-implementation">
 
-The Hugging Face Hub client returns a `ChatCompletion` object, which contains the response from the model as a string. You can then parse the response to get the structured data using the `json.loads` function.
+The Hugging Face Hub client returns a `ChatCompletion` object, which contains the response from the model as a string. Use the `json.loads` function to parse the response and get the structured data.
 
 <hfoption id="huggingface_hub">
 
@@ -255,7 +255,7 @@ print(f"Abstract Summary: {analysis['abstract_summary']}")
 
 </hfoption>
 
-The OpenAI client returns a `ChatCompletion` object, which contains the response from the model as a Python object. You can then access the structured data using the `title` and `abstract_summary` attributes of the `PaperAnalysis` class.
+The OpenAI client returns a `ChatCompletion` object, which contains the response from the model as a Python object. Access the structured data using the `title` and `abstract_summary` attributes of the `PaperAnalysis` class.
 
 <hfoption id="openai">
 
@@ -461,7 +461,7 @@ def analyze_paper_structured():
 Now that you understand structured outputs, you probably want to build an application that uses them. Here are some ideas for fun things you can try out:
 
 - **Different models**: Experiment with different models. The biggest models are not always the best for structured outputs!
-- **Multi-turn conversations**: Maintaining structured format across conversation turns
-- **Complex schemas**: Building domain-specific schemas for your use case
+- **Multi-turn conversations**: Maintaining structured format across conversation turns.
+- **Complex schemas**: Building domain-specific schemas for your use case.
 - **Performance optimization**: Choosing the right provider for your structured output needs.
 