
Commit e661df3

Update python.md
1 parent 6f2283b commit e661df3

File tree

  • articles/ai-foundry/model-inference/includes/use-chat-reasoning

articles/ai-foundry/model-inference/includes/use-chat-reasoning/python.md

Lines changed: 140 additions & 20 deletions
@@ -29,6 +29,21 @@ To complete this tutorial, you need:

First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.

# [OpenAI](#tab/openai)

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource>.services.ai.azure.com",
    api_key=os.getenv("AZURE_INFERENCE_CREDENTIAL"),
    api_version="2024-10-21",
)
```

# [Model Inference (preview)](#tab/inference)

```python
import os
from azure.ai.inference import ChatCompletionsClient
@@ -40,12 +55,30 @@ client = ChatCompletionsClient(
    model="deepseek-r1"
)
```
---

If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.

# [OpenAI](#tab/openai)

```python
import os
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<resource>.services.ai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-10-21",
)
```

# [Model Inference (preview)](#tab/inference)

```python
import os
from azure.ai.inference import ChatCompletionsClient
@@ -58,13 +91,29 @@ client = ChatCompletionsClient(
    model="deepseek-r1"
)
```
---

[!INCLUDE [best-practices](best-practices.md)]

### Create a chat completion request

The following example shows how you can create a basic chat request to the model.

# [OpenAI](#tab/openai)

```python
response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "How many languages are in the world?"}
    ]
)

print(response.model_dump_json(indent=2))
```

# [Model Inference (preview)](#tab/inference)

```python
from azure.ai.inference.models import SystemMessage, UserMessage

@@ -74,11 +123,14 @@ response = client.complete(
    ],
)
```
---

[!INCLUDE [best-practices](best-practices.md)]

The response is as follows, where you can see the model's usage statistics:

# [OpenAI](#tab/openai)

```python
print("Response:", response.choices[0].message.content)
print("Model:", response.model)
@@ -89,47 +141,71 @@ print("\tCompletion tokens:", response.usage.completion_tokens)
```

```console
Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
Model: deepseek-r1
Usage:
    Prompt tokens: 11
    Total tokens: 897
    Completion tokens: 886
```

# [Model Inference (preview)](#tab/inference)

```python
print("Response:", response.choices[0].message.content)
print("Model:", response.model)
print("Usage:")
print("\tPrompt tokens:", response.usage.prompt_tokens)
print("\tTotal tokens:", response.usage.total_tokens)
print("\tCompletion tokens:", response.usage.completion_tokens)
```

```console
Response: <think>Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer...</think>As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
Model: deepseek-r1
Usage:
    Prompt tokens: 11
    Total tokens: 897
    Completion tokens: 886
```

---

### Reasoning content

Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them.

# [OpenAI](#tab/openai)

The reasoning associated with the completion is included in the field `reasoning_content`. The model might decide for which scenarios to generate reasoning content.

```python
print("Thinking:", response.choices[0].message.reasoning_content)
```

```console
Thinking: Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer...
```

# [Model Inference (preview)](#tab/inference)

The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might decide for which scenarios to generate reasoning content. You can extract the reasoning content from the response to understand the model's thought process as follows:

```python
import re

match = re.match(r"<think>(.*?)</think>(.*)", response.choices[0].message.content, re.DOTALL)

if match:
    print("\tThinking:", match.group(1))
else:
    print("\tAnswer:", response.choices[0].message.content)
```

```console
Thinking: Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer. Let's start...
```
---

In multi-turn conversations, it's useful to avoid sending the reasoning content back in the chat history, because reasoning tends to generate long explanations.

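As a minimal sketch of that pattern (reusing the `client` and message types from the Model Inference examples above; the `strip_reasoning` helper is hypothetical), you can drop the `<think>...</think>` block before adding the assistant turn to the history:

```python
import re

from azure.ai.inference.models import AssistantMessage, UserMessage

def strip_reasoning(text: str) -> str:
    # Remove the <think>...</think> block so only the final answer is kept.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

messages = [UserMessage(content="How many languages are in the world?")]
response = client.complete(messages=messages)

# Store only the answer in the history before asking a follow-up question.
messages.append(AssistantMessage(content=strip_reasoning(response.choices[0].message.content)))
messages.append(UserMessage(content="Which of them is the most widely spoken?"))
response = client.complete(messages=messages)
```
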
@@ -141,6 +217,19 @@ You can _stream_ the content to get it as it's being generated. Streaming conten

To stream completions, set `stream=True` when you call the model.

# [OpenAI](#tab/openai)

```python
response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "How many languages are in the world?"}
    ],
    stream=True
)
```

# [Model Inference (preview)](#tab/inference)

```python
response = client.complete(
@@ -152,9 +241,34 @@ response = client.complete(
    stream=True,
)
```
---

To visualize the output, define a helper function to print the stream. The following example implements a routine that streams only the answer, without the reasoning content:

# [OpenAI](#tab/openai)

```python
def print_stream(completion):
    """
    Prints the chat completion with streaming.
    """
    is_thinking = False
    for event in completion:
        if event.choices:
            content = event.choices[0].delta.content
            reasoning_content = event.choices[0].delta.reasoning_content
            if reasoning_content and not is_thinking:
                # Announce once that the model is reasoning; the reasoning tokens themselves aren't printed.
                is_thinking = True
                print("🧠 Thinking...", end="", flush=True)
            elif content:
                if is_thinking:
                    is_thinking = False
                    print("🛑\n\n")
                print(content, end="", flush=True)
```

# [Model Inference (preview)](#tab/inference)

```python
def print_stream(completion):
    """
@@ -173,6 +287,7 @@ def print_stream(completion):
            elif content:
                print(content, end="", flush=True)
```
---

You can visualize how streaming generates content:

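For example, passing the streamed response to the helper defined above prints the answer as it arrives (a minimal usage sketch that works for either tab):

```python
# Consume the stream returned by the call above with stream=True.
print_stream(response)
```
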
@@ -198,6 +313,10 @@ The Azure AI Model Inference API supports [Azure AI Content Safety](https://aka.

The following example shows how to handle events when the model detects harmful content in the input prompt.

# [OpenAI](#tab/openai)
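
The OpenAI SDK surfaces blocked prompts as HTTP 400 errors. The following is a minimal sketch, assuming the `client` created earlier and that the service reports the standard `content_filter` error code on the exception; the prompt shown is only a placeholder:

```python
from openai import BadRequestError

try:
    response = client.chat.completions.create(
        model="deepseek-r1",
        messages=[
            # Placeholder prompt used only to illustrate the error path.
            {"role": "user", "content": "<a prompt that the content safety system may block>"}
        ]
    )
    print(response.choices[0].message.content)
except BadRequestError as ex:
    # Assumption: Azure AI Content Safety violations come back as HTTP 400
    # with the error code "content_filter" exposed on the exception.
    if ex.code == "content_filter":
        print("Your prompt was blocked by the content safety system.")
    else:
        raise
```
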
# [Model Inference (preview)](#tab/inference)

```python
from azure.ai.inference.models import AssistantMessage, UserMessage

@@ -220,6 +339,7 @@ except HttpResponseError as ex:
            raise
    raise
```
---

> [!TIP]
> To learn more about how you can configure and control Azure AI Content Safety settings, check the [Azure AI Content Safety documentation](https://aka.ms/azureaicontentsafety).
