
Commit 4897e23

add structured outputs information to readme (#181)
1 parent 65a9468 commit 4897e23

File tree

1 file changed: 55 additions, 0 deletions


README.md

Lines changed: 55 additions & 0 deletions
@@ -17,6 +17,7 @@ It is generated from our [OpenAPI specification](https://github.com/openai/opena
- [Using the `OpenAIClient` class](#using-the-openaiclient-class)
- [How to use chat completions with streaming](#how-to-use-chat-completions-with-streaming)
- [How to use chat completions with tools and function calling](#how-to-use-chat-completions-with-tools-and-function-calling)
- [How to use structured outputs](#how-to-use-structured-outputs)
- [How to generate text embeddings](#how-to-generate-text-embeddings)
- [How to generate images](#how-to-generate-images)
- [How to transcribe audio](#how-to-transcribe-audio)
@@ -296,6 +297,60 @@ do
} while (requiresAction);
```

## How to use structured outputs

Beginning with the `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`, and `gpt-4o-2024-08-06` model snapshots, structured outputs are available for both top-level response content and tool calls in the chat completion and assistants APIs.

For information about the feature, see [the Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs/introduction).

To use structured outputs to constrain chat completion content, set an appropriate `ChatResponseFormat` as in the following example:

```csharp
ChatCompletionOptions options = new()
{
    ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(
        name: "math_reasoning",
        jsonSchema: BinaryData.FromString("""
            {
                "type": "object",
                "properties": {
                    "steps": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "explanation": { "type": "string" },
                                "output": { "type": "string" }
                            },
                            "required": ["explanation", "output"],
                            "additionalProperties": false
                        }
                    },
                    "final_answer": { "type": "string" }
                },
                "required": ["steps", "final_answer"],
                "additionalProperties": false
            }
            """),
        strictSchemaEnabled: true)
};

ChatCompletion chatCompletion = await client.CompleteChatAsync(
    ["How can I solve 8x + 7 = -23?"],
    options);

using JsonDocument structuredJson = JsonDocument.Parse(chatCompletion.ToString());

Console.WriteLine($"Final answer: {structuredJson.RootElement.GetProperty("final_answer").GetString()}");
Console.WriteLine("Reasoning steps:");

foreach (JsonElement stepElement in structuredJson.RootElement.GetProperty("steps").EnumerateArray())
{
    Console.WriteLine($"  - Explanation: {stepElement.GetProperty("explanation").GetString()}");
    Console.WriteLine($"    Output: {stepElement.GetProperty("output")}");
}
```
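For strongly typed access instead of `JsonDocument`, the constrained JSON can also be bound to plain C# types with `System.Text.Json`. The snippet below is a minimal sketch rather than part of this commit: the `MathReasoning` and `MathStep` records are hypothetical names mirroring the `math_reasoning` schema above, it reuses the `chatCompletion` result from the preceding example, and it assumes .NET 8 for `JsonNamingPolicy.SnakeCaseLower`.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Map snake_case JSON properties ("final_answer") onto PascalCase record members.
JsonSerializerOptions jsonOptions = new()
{
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
};

MathReasoning reasoning = JsonSerializer.Deserialize<MathReasoning>(
    chatCompletion.ToString(),
    jsonOptions)!;

Console.WriteLine($"Final answer: {reasoning.FinalAnswer}");

foreach (MathStep step in reasoning.Steps)
{
    Console.WriteLine($"  - {step.Explanation} => {step.Output}");
}

// Hypothetical types mirroring the "math_reasoning" JSON schema above.
public record MathStep(string Explanation, string Output);
public record MathReasoning(List<MathStep> Steps, string FinalAnswer);
```

Keep in mind that the API can still return a refusal instead of schema-conforming JSON, so production code may want to check for a refusal before parsing.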
## How to generate text embeddings

In this example, you want to create a trip-planning website that allows customers to write a prompt describing the kind of hotel that they are looking for and then offers hotel recommendations that closely match this description. To achieve this, it is possible to use text embeddings to measure the relatedness of text strings. In summary, you can get embeddings of the hotel descriptions, store them in a vector database, and use them to build a search index that you can query using the embedding of a given customer's prompt.
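To make the "relatedness" measurement concrete, a common choice is cosine similarity between two embedding vectors. The sketch below is illustrative rather than part of this commit: it assumes the two `ReadOnlyMemory<float>` vectors for a hotel description and a customer prompt have already been generated with the embeddings API, and `CosineSimilarity` is a hypothetical helper, not a member of the library.

```csharp
using System;

// Hypothetical helper: cosine similarity between two embedding vectors.
// A score closer to 1.0 means the hotel description is more related to the prompt.
static double CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    if (a.Length != b.Length)
    {
        throw new ArgumentException("Embedding vectors must have the same dimensionality.");
    }

    double dot = 0.0, magnitudeA = 0.0, magnitudeB = 0.0;

    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magnitudeA += a[i] * a[i];
        magnitudeB += b[i] * b[i];
    }

    return dot / (Math.Sqrt(magnitudeA) * Math.Sqrt(magnitudeB));
}

// `hotelDescriptionVector` and `customerPromptVector` are assumed to be
// ReadOnlyMemory<float> values produced earlier from the embeddings API.
double score = CosineSimilarity(hotelDescriptionVector.Span, customerPromptVector.Span);
Console.WriteLine($"Relatedness score: {score:F4}");
```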
