Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them.
-# [OpenAI](#tab/openai)
+# [OpenAI API](#tab/openai)
The reasoning associated with the completion is included in the field `reasoning_content`. The model may decide in which scenarios to generate reasoning content.
Thinking: Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer...
```
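Because the field is optional (the model decides when to reason), it is safest to read it defensively. Below is a minimal sketch; the `get_reasoning` helper and the stand-in message object are illustrative, assuming the SDK surfaces the field as a `reasoning_content` attribute on the message, as described above:

```python
from types import SimpleNamespace

def get_reasoning(message):
    # `reasoning_content` is optional: the model decides when to emit it,
    # so fall back to None instead of raising AttributeError.
    return getattr(message, "reasoning_content", None)

# Stand-in for `response.choices[0].message` from the client (illustrative):
message = SimpleNamespace(
    reasoning_content="Okay, the user is asking how many languages exist...",
    content="There are approximately 7,000 languages in the world.",
)

print("Thinking:", get_reasoning(message))
print("Answer:", message.content)
```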
-# [Model Inference (preview)](#tab/inference)
+# [Model Inference API (preview)](#tab/inference)
The reasoning associated with the completion is included in the response's content, within the tags `<think>` and `</think>`. The model may decide in which scenarios to generate reasoning content. You can extract the reasoning content from the response to understand the model's thought process as follows:
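One simple way to do that split is with a regular expression; the helper below is a sketch (the `split_reasoning` name and parsing approach are illustrative, not the article's own code), assuming the reasoning block appears at the start of the content:

```python
import re

def split_reasoning(content: str):
    """Split a completion into (reasoning, answer).

    The model wraps its reasoning in <think>...</think> at the start of
    the content; if the tags are absent, the whole content is the answer.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", content, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", content.strip()

reasoning, answer = split_reasoning(
    "<think>Okay, the user is asking how many languages exist...</think>"
    "There are approximately 7,000 languages in the world."
)
print("Thinking:", reasoning)
print("Answer:", answer)
```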
@@ -216,7 +216,7 @@ You can _stream_ the content to get it as it's being generated. Streaming conten
To stream completions, set `stream=True` when you call the model.
To visualize the output, define a helper function to print the stream. The following example implements a routine that streams only the answer, without the reasoning content:
-# [OpenAI](#tab/openai)
+# [OpenAI API](#tab/openai)
Reasoning content is also included inside the delta pieces of the response, in the key `reasoning_content`.
@@ -268,7 +268,7 @@ def print_stream(completion):
print(content, end="", flush=True)
```
-# [Model Inference (preview)](#tab/inference)
+# [Model Inference API (preview)](#tab/inference)
When streaming, pay close attention to the `<think>` tag that may be included inside the `content` field.
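As a rough illustration of that caveat, the filter below drops everything between `<think>` and `</think>` while consuming content deltas. It assumes each tag arrives whole inside a single delta, which real streams do not guarantee, so treat it as a sketch rather than production code; the `filter_stream` name is illustrative:

```python
def filter_stream(deltas):
    """Collect only the answer text from content deltas, skipping the
    reasoning between <think> and </think>.

    Simplifying assumptions: a tag is never split across two deltas,
    and the reasoning block comes before the answer.
    """
    thinking = False
    answer = []
    for delta in deltas:
        if "<think>" in delta:
            thinking = True
            delta = delta.split("<think>", 1)[1]
        if "</think>" in delta:
            thinking = False
            # Keep only what follows the closing tag.
            answer.append(delta.split("</think>", 1)[1])
            continue
        if not thinking:
            answer.append(delta)
    return "".join(answer)

print(filter_stream(["<think>Counting langu", "ages...</think>", "About 7,000."]))
```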
@@ -316,7 +316,7 @@ The Azure AI Model Inference API supports [Azure AI Content Safety](https://aka.
The following example shows how to handle events when the model detects harmful content in the input prompt.
-# [OpenAI](#tab/openai)
+# [OpenAI API](#tab/openai)
```python
try:
@@ -339,7 +339,7 @@ except HttpResponseError as ex:
    raise
```
-# [Model Inference (preview)](#tab/inference)
+# [Model Inference API (preview)](#tab/inference)

```python
from azure.ai.inference.models import AssistantMessage, UserMessage