> [!TIP]
> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as serverless API endpoints; however, those endpoints don't take the `model` parameter as explained in this tutorial. You can verify this by going to [Azure AI Foundry portal]() > **Models + endpoints** and checking that the model is listed under the **Azure AI Services** section.
If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
# [API version 2025-04-01](#tab/2025-04-01)
```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential
```
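The remaining lines of the snippet are a minimal sketch of creating the client and sending a request, assuming the endpoint is stored in an environment variable named `AZURE_INFERENCE_ENDPOINT` and using an illustrative prompt (the variable name, API version, and prompt are assumptions, not part of the original sample):

```python
# Sketch only: client creation with Microsoft Entra ID (key-less) authentication.
# The environment variable name, API version, and prompt are illustrative assumptions.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
    api_version="2025-04-01",
)

response = client.complete(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "How many languages are in the world?"}],
)

print("Response:", response.choices[0].message.content)
print("Model:", response.model)
print("Usage:")
print("Prompt tokens:", response.usage.prompt_tokens)
print("Total tokens:", response.usage.total_tokens)
print("Completion tokens:", response.usage.completion_tokens)
```

The output resembles the following: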
```
Response: As of now, it's estimated that there are about 7,000 languages spoken around the world. However, this number can vary as some languages become extinct and new ones develop. It's also important to note that the number of speakers can greatly vary between languages, with some having millions of speakers and others only a few hundred.
Model: deepseek-r1
Usage:
Prompt tokens: 11
Total tokens: 897
Completion tokens: 886
```
# [API version 2024-05-01-preview](#tab/2024-05-01-preview)
---

### Reasoning content
Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them.
# [API version 2025-04-01](#tab/2025-04-01)
The reasoning associated with the completion is included in the response's `reasoning_content` field. The model may select the scenarios in which to generate reasoning content.
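A minimal sketch of reading that field, assuming the earlier completion is stored in `response` and that the SDK surfaces `reasoning_content` on the message object (the attribute access shown here is an assumption):

```python
# Sketch only: print the reasoning separately from the final answer.
message = response.choices[0].message
print("Thinking:", message.reasoning_content)  # assumption: field surfaced by the SDK
print("Response:", message.content)
```

The output resembles the following: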
```
Thinking: Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer...
```
# [API version 2024-05-01-preview](#tab/2024-05-01-preview)
The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model may select the scenarios in which to generate reasoning content. You can extract the reasoning content from the response to understand the model's thought process as follows:

```python
import re

# Sketch: split the reasoning inside <think>...</think> from the final answer.
# Assumes the completion from the earlier request is stored in `response`.
match = re.match(r"<think>(.*?)</think>(.*)", response.choices[0].message.content, re.DOTALL)

if match:
    print("Thinking:", match.group(1))
    print("Answer:", match.group(2))
else:
    print("Answer:", response.choices[0].message.content)

print("Usage:")
print("Prompt tokens:", response.usage.prompt_tokens)
print("Total tokens:", response.usage.total_tokens)
print("Completion tokens:", response.usage.completion_tokens)
```

```
Usage:
Prompt tokens: 11
Total tokens: 897
Completion tokens: 886
```
---
When making multi-turn conversations, it's useful to avoid sending the reasoning content in the chat history as reasoning tends to generate long explanations.
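For example, a sketch of building the next turn's history with only the final answer (the variable `answer` and the follow-up question are illustrative):

```python
# Keep the chat history small: store only the answer, not the reasoning content.
messages = [
    {"role": "user", "content": "How many languages are in the world?"},
    {"role": "assistant", "content": answer},  # `answer` excludes the <think> block
    {"role": "user", "content": "And which one is the most spoken?"},  # illustrative follow-up
]
```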
### Stream content
You can _stream_ the content to get it as it's being generated.
To stream completions, set `stream=True` when you call the model.
```python
result = client.complete(
    model="deepseek-r1",
    messages=[
        # The prompt is an illustrative assumption
        {"role": "user", "content": "How many languages are in the world?"},
    ],
    stream=True,
)
```
To visualize the output, define a helper function to print the stream. The following example implements a routine that streams only the answer, without the reasoning content:
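A minimal sketch of such a helper, assuming the reasoning arrives between `<think>` and `</think>` markers in the streamed content:

```python
def print_stream(result):
    """Sketch: print streamed content, skipping everything between <think> and </think>."""
    is_thinking = False
    for update in result:
        if not update.choices:
            continue
        content = update.choices[0].delta.content or ""
        if "<think>" in content:
            is_thinking = True
        elif "</think>" in content:
            is_thinking = False
        elif content and not is_thinking:
            print(content, end="", flush=True)

print_stream(result)
```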
**articles/ai-foundry/model-inference/includes/use-chat-reasoning/rest.md**
First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
# [API version 2025-04-01](#tab/2025-04-01)

```http
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2025-04-01
Content-Type: application/json
api-key: <key>
```

# [API version 2024-05-01-preview](#tab/2024-05-01-preview)

```http
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
Content-Type: application/json
api-key: <key>
```
---
> [!TIP]
> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as serverless API endpoints; however, those endpoints don't take the `model` parameter as explained in this tutorial. You can verify this by going to [Azure AI Foundry portal]() > **Models + endpoints** and checking that the model is listed under the **Azure AI Services** section.
If you have configured the resource with **Microsoft Entra ID** support, pass your token in the `Authorization` header with the format `Bearer <token>`. Use the scope `https://cognitiveservices.azure.com/.default`.
# [API version 2025-04-01](#tab/2025-04-01)

```http
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2025-04-01
Content-Type: application/json
Authorization: Bearer <token>
```

# [API version 2024-05-01-preview](#tab/2024-05-01-preview)

```http
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
Content-Type: application/json
Authorization: Bearer <token>
```
---
Using Microsoft Entra ID may require additional configuration in your resource to grant access. Learn how to [configure key-less authentication with Microsoft Entra ID](../../how-to/configure-entra-id.md).
### Create a chat completion request
The following example shows how you can create a basic chat request to the model:
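A minimal request body sketch (the prompt is illustrative):

```json
{
    "model": "DeepSeek-R1",
    "messages": [
        {
            "role": "user",
            "content": "How many languages are in the world?"
        }
    ]
}
```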
The response is as follows, where you can see the model's usage statistics:
# [API version 2025-04-01](#tab/2025-04-01)
The reasoning associated with the completion is included in the response's `reasoning_content` field. The model may select the scenarios in which to generate reasoning content.
```json
{
    "id": "0a1234b5de6789f01gh2i345j6789klm",
    "object": "chat.completion",
    "created": 1718726686,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The exact number of languages in the world is challenging to determine due to differences in definitions (e.g., distinguishing languages from dialects) and ongoing documentation efforts. However, widely cited estimates suggest there are approximately **7,000 languages** globally.",
                "reasoning_content": "Okay, the user is asking how many languages exist in the world. I need to provide a clear and accurate answer. Let's start by recalling the general consensus from linguistic sources. I remember that the number often cited is around 7,000, but maybe I should check some reputable organizations.\n\nEthnologue is a well-known resource for language data, and I think they list about 7,000 languages. But wait, do they update their numbers? It might be around 7,100 or so. Also, the exact count can vary because some sources might categorize dialects differently or have more recent data. \n\nAnother thing to consider is language endangerment. Many languages are endangered, with some having only a few speakers left. Organizations like UNESCO track endangered languages, so mentioning that adds context. Also, the distribution isn't even. Some countries have hundreds of languages, like Papua New Guinea with over 800, while others have just a few. \n\nA user might also wonder why the exact number is hard to pin down. It's because the distinction between a language and a dialect can be political or cultural. For example, Mandarin and Cantonese are considered dialects of Chinese by some, but they're mutually unintelligible, so others classify them as separate languages. Also, some regions are under-researched, making it hard to document all languages. \n\nI should also touch on language families. The 7,000 languages are grouped into families like Indo-European, Sino-Tibetan, Niger-Congo, etc. Maybe mention a few of the largest families. But wait, the question is just about the count, not the families. Still, it's good to provide a bit more context. \n\nI need to make sure the information is up-to-date. Let me think – recent estimates still hover around 7,000. However, languages are dying out rapidly, so the number decreases over time. Including that note about endangerment and language extinction rates could be helpful. For instance, it's often stated that a language dies every few weeks. \n\nAnother point is sign languages. Does the count include them? Ethnologue includes some, but not all sources might. If the user is including sign languages, that adds more to the count, but I think the 7,000 figure typically refers to spoken languages. For thoroughness, maybe mention that there are also over 300 sign languages. \n\nSummarizing, the answer should state around 7,000, mention Ethnologue's figure, explain why the exact number varies, touch on endangerment, and possibly note sign languages as a separate category. Also, a brief mention of Papua New Guinea as the most linguistically diverse country. \n\nWait, let me verify Ethnologue's current number. As of their latest edition (25th, 2022), they list 7,168 living languages. But I should check if that's the case. Some sources might round to 7,000. Also, SIL International publishes Ethnologue, so citing them as reference makes sense. \n\nOther sources, like Glottolog, might have a different count because they use different criteria. Glottolog might list around 7,000 as well, but exact numbers vary. It's important to highlight that the count isn't exact because of differing definitions and ongoing research. \n\nIn conclusion, the approximate number is 7,000, with Ethnologue being a key source, considerations of endangerment, and the challenges in counting due to dialect vs. language distinctions. I should make sure the answer is clear, acknowledges the variability, and provides key points succinctly.",
                "tool_calls": null
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 11,
        "total_tokens": 897,
        "completion_tokens": 886
    }
}
```
# [API version 2024-05-01-preview](#tab/2024-05-01-preview)
The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model may select the scenarios in which to generate reasoning content.
```json
{
    "id": "0a1234b5de6789f01gh2i345j6789klm",
}
```
---
### Reasoning content
Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them. When making multi-turn conversations, it's useful to avoid sending the reasoning content in the chat history, as reasoning tends to generate long explanations.
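For example, a follow-up request could carry only the final answer in the assistant turn (the messages shown are illustrative):

```json
{
    "model": "DeepSeek-R1",
    "messages": [
        {
            "role": "user",
            "content": "How many languages are in the world?"
        },
        {
            "role": "assistant",
            "content": "There are approximately 7,000 languages in the world."
        },
        {
            "role": "user",
            "content": "And which one is the most spoken?"
        }
    ]
}
```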
### Stream content
You can _stream_ the content to get it as it's being generated.
To stream completions, set `"stream": true` when you call the model.
```json
{
    "model": "DeepSeek-R1",
    "messages": [
        {
            "role": "user",
            "content": "How many languages are in the world?"
        }
    ],
    "stream": true
}
```
The output looks as follows:
# [API version 2025-04-01](#tab/2025-04-01)

```json
{
    "id": "23b54589eba14564ad8a2e6978775a39",
    "object": "chat.completion.chunk",
    "created": 1718726371,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": "",
                "reasoning_content": ""
            },
            "finish_reason": null,
            "logprobs": null
        }
    ]
}
```
# [API version 2024-05-01-preview](#tab/2024-05-01-preview)
```json
{
    "id": "23b54589eba14564ad8a2e6978775a39",
    "object": "chat.completion.chunk",
    "created": 1718726371,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": ""
            },
            "finish_reason": null,
            "logprobs": null
        }
    ]
}
```
---
The last message in the stream has `finish_reason` set, indicating the reason for the generation process to stop.
# [API version 2025-04-01](#tab/2025-04-01)
```json
{
    "id": "23b54589eba14564ad8a2e6978775a39",
    "object": "chat.completion.chunk",
    "created": 1718726371,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "delta": {
                "content": "",
                "reasoning_content": ""
            },
            "finish_reason": "stop",
            "logprobs": null
        }
    ],
    "usage": {
        "prompt_tokens": 11,
        "total_tokens": 897,
        "completion_tokens": 886
    }
}
```
# [API version 2024-05-01-preview](#tab/2024-05-01-preview)
```json
{
    "id": "23b54589eba14564ad8a2e6978775a39",
    "object": "chat.completion.chunk",
    "created": 1718726371,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "delta": {
                "content": ""
            },
            "finish_reason": "stop",
            "logprobs": null
        }
    ],
    "usage": {
        "prompt_tokens": 11,
        "total_tokens": 897,
        "completion_tokens": 886
    }
}
```
---
### Parameters
In general, reasoning models don't support the following parameters you can find in chat completion models: