Commit 5a93c47

Update python.md
1 parent 0784229 commit 5a93c47

File tree

  • articles/ai-foundry/model-inference/includes/use-chat-reasoning

1 file changed: +25 −1 lines changed

articles/ai-foundry/model-inference/includes/use-chat-reasoning/python.md

Lines changed: 25 additions & 1 deletion
@@ -178,7 +178,6 @@ Some reasoning models, like DeepSeek-R1, generate completions and include the re
 
 The reasoning associated with the completion is included in the field `reasoning_content`. The model may select the scenarios in which to generate reasoning content.
 
-```python
 ```python
 print("Thinking:", response.choices[0].message.reasoning_content)
 ```
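Because the model may omit reasoning for some prompts, it can be safer to guard the field before printing. A minimal sketch of that check; the `format_reasoning` helper and the `SimpleNamespace` stand-in are illustrative, not part of any SDK:

```python
from types import SimpleNamespace

def format_reasoning(message):
    """Return printable lines; reasoning_content may be absent or None."""
    lines = []
    reasoning = getattr(message, "reasoning_content", None)
    if reasoning:
        lines.append(f"Thinking: {reasoning}")
    lines.append(f"Answer: {message.content}")
    return lines

# Stand-in for a chat-completions message; a real one comes from the client.
msg = SimpleNamespace(reasoning_content="Compute 6 * 7.", content="42")
print("\n".join(format_reasoning(msg)))
```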
@@ -247,6 +246,8 @@ To visualize the output, define a helper function to print the stream. The follo
 
 # [OpenAI](#tab/openai)
 
+Reasoning content is also included in the delta chunks of the response, under the key `reasoning_content`.
+
 ```python
 def print_stream(completion):
     """
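The delta handling described above can be sketched with plain dicts standing in for the streamed chunks; `collect_stream` is a hypothetical helper, not an SDK function:

```python
def collect_stream(deltas):
    """Separate streamed reasoning from answer text.

    Each delta is assumed to be a dict that may carry a
    'reasoning_content' key and/or a 'content' key.
    """
    thinking, answer = [], []
    for delta in deltas:
        if delta.get("reasoning_content"):
            thinking.append(delta["reasoning_content"])
        if delta.get("content"):
            answer.append(delta["content"])
    return "".join(thinking), "".join(answer)

# Illustrative stand-in for a stream of delta chunks.
stream = [{"reasoning_content": "Think"}, {"reasoning_content": "ing..."}, {"content": "Done"}]
print(collect_stream(stream))
```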
@@ -269,6 +270,8 @@ def print_stream(completion):
 
 # [Model Inference (preview)](#tab/inference)
 
+When streaming, watch for the `<think>` tag, which may be included inside the `content` field.
+
 ```python
 def print_stream(completion):
     """
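One way to handle that tag, assuming the reasoning arrives as a single `<think>...</think>` block at the start of the accumulated content (`split_think` is an illustrative helper, not part of the SDK):

```python
import re

def split_think(text):
    """Split accumulated streamed content into (reasoning, answer).

    Assumes reasoning, when present, is wrapped in one leading
    <think>...</think> block inside the content field.
    """
    match = re.match(r"<think>(.*?)</think>(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()

print(split_think("<think>Add 2 and 2.</think>4"))
```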
@@ -315,6 +318,27 @@ The following example shows how to handle events when the model detects harmful
 
 # [OpenAI](#tab/openai)
 
+```python
+try:
+    response = client.chat.completions.create(
+        model="deepseek-r1",
+        messages=[
+            {"role": "user", "content": "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."}
+        ],
+    )
+
+    print(response.choices[0].message.content)
+
+except HttpResponseError as ex:
+    if ex.status_code == 400:
+        response = ex.response.json()
+        if isinstance(response, dict) and "error" in response:
+            print(f"Your request triggered an {response['error']['code']} error:\n\t {response['error']['message']}")
+        else:
+            raise
+    else:
+        raise
+```
 
 # [Model Inference (preview)](#tab/inference)
 
 ```python

0 commit comments