articles/ai-foundry/foundry-models/includes/use-chat-reasoning/about-reasoning.md (1 addition, 1 deletion)
@@ -18,4 +18,4 @@ Reasoning models produce two types of content as outputs:
* Reasoning completions
* Output completions
-Both of these completions count towards content generated from the model. Therefore, they contribute to the token limits and costs associated with the model. Some models, like `DeepSeek-R1`, might respond with the reasoning content. Others, like `o1`, only output the completions.
+Both of these completions count towards content generated from the model. Therefore, they contribute to the token limits and costs associated with the model. Some models, like `DeepSeek-R1`, might respond with the reasoning content. Others, like `o1`, output only the completions.
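Because both kinds of completion are billed as generated tokens, the usage block returned with each chat completion is the quickest way to see how much a reasoning model actually produced. The following is a minimal sketch, assuming the `azure-ai-inference` Python package, key-based authentication, and a deployment named `DeepSeek-R1`; the environment variable names and the sample question are placeholders, not taken from these articles:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder environment variables for the resource endpoint and API key.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

response = client.complete(
    model="DeepSeek-R1",  # assumed deployment name
    messages=[UserMessage(content="How many languages are in the world?")],
)

# completion_tokens covers everything the model generated, including any
# reasoning returned inline between <think> and </think> tags.
print("Prompt tokens:    ", response.usage.prompt_tokens)
print("Completion tokens:", response.usage.completion_tokens)
print("Total tokens:     ", response.usage.total_tokens)
```

The same figures drive billing, so trimming prompts and carrying only final answers in multi-turn history keeps costs down.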
articles/ai-foundry/foundry-models/includes/use-chat-reasoning/best-practices.md (4 additions, 4 deletions)
@@ -14,8 +14,8 @@ When building prompts for reasoning models, take the following into consideratio
> [!div class="checklist"]
> * Use simple instructions and avoid using chain-of-thought techniques.
> * Built-in reasoning capabilities make simple zero-shot prompts as effective as more complex methods.
-> * When providing additional context or documents, like in RAG scenarios, including only the most relevant information may help preventing the model from over-complicating its response.
-> * Reasoning models may support the use of system messages. However, they may not follow them as strictly as other non-reasoning models.
-> * When creating multi-turn applications, consider only appending the final answer from the model, without it's reasoning content as explained at[Reasoning content](#reasoning-content) section.
+> * When providing additional context or documents, like in RAG scenarios, including only the most relevant information might help prevent the model from over-complicating its response.
+> * Reasoning models may support the use of system messages. However, they might not follow them as strictly as other non-reasoning models.
+> * When creating multi-turn applications, consider appending only the final answer from the model, without its reasoning content, as explained in the [Reasoning content](#reasoning-content) section.
-Notice that reasoning models can take longer times to generate responses. They use long reasoning chains of thought that enabled deeper and more structured problem-solving. They also perform self-verification to cross-check its own answers and correct its own mistakes, showcasing emergent self-reflective behaviors.
+Notice that reasoning models can take longer times to generate responses. They use long reasoning chains of thought that enable deeper and more structured problem-solving. They also perform self-verification to cross-check their answers and correct their mistakes, thereby showcasing emergent self-reflective behaviors.
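The last checklist item above recommends appending only the model's final answer to the conversation history. Here is a hedged sketch of one way to do that, assuming the reasoning is returned inline between `<think>` and `</think>` tags as described for DeepSeek-R1 later in this diff; the helper name and the message dictionaries are illustrative, not from the article:

```python
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>", re.DOTALL)

def final_answer_only(model_content: str) -> str:
    """Strip inline <think>...</think> reasoning so only the final answer
    gets appended to the conversation history (illustrative helper)."""
    return THINK_BLOCK.sub("", model_content).strip()

# Hypothetical multi-turn exchange: store the cleaned answer, not the reasoning.
history = [{"role": "user", "content": "Summarize the incident report."}]
model_content = "<think>The user wants a short summary...</think>Here is the summary."
history.append({"role": "assistant", "content": final_answer_only(model_content)})
```

Keeping the reasoning out of the history also avoids paying for those tokens again as prompt input on every later turn.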
articles/ai-foundry/foundry-models/tutorials/get-started-deepseek-r1.md (5 additions, 5 deletions)
@@ -33,7 +33,7 @@ To complete this article, you need:
## Create the resources
-Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI Hubs and Projects in Azure AI Foundry to create intelligent applications if needed.
+Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to create intelligent applications if needed.
To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources, using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code, with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).
@@ -93,9 +93,9 @@ To create an Azure AI project that supports deployment for DeepSeek-R1, follow t
You can get started by using the model in the playground to have an idea of the model's capabilities.
-1. On the deployment details page, select **Open in playground** in the top bar.
+1. On the deployment details page, select **Open in playground** in the top bar. This action opens the chat playground.
-2. In the **Deployment** drop down, the deployment you created is already automatically selected.
+2. In the **Deployment** drop down of the chat playground, the deployment you created is already automatically selected.
3. Configure the system prompt as needed. In general, reasoning models don't use system messages in the same way as other types of models.
@@ -124,7 +124,7 @@ Reasoning might generate longer responses and consume a larger number of tokens.
### Reasoning content
-Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind it. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select which scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content, using Python:
+Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind it. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select the scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content, using Python:
```python
import re
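# Only `import re` from the tutorial's example is visible above. As a hedged
# sketch of the technique the paragraph describes (not the article's exact
# code), the inline reasoning can be separated from the final answer like
# this, assuming `response` is a chat completion returned by the deployment:
match = re.match(r"<think>(.*?)</think>(.*)", response.choices[0].message.content, re.DOTALL)

if match:
    print("Reasoning:", match.group(1).strip())
    print("Answer:   ", match.group(2).strip())
else:
    # The model can skip reasoning content for some prompts.
    print("Answer:   ", response.choices[0].message.content)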
@@ -158,7 +158,7 @@ Usage:
### Parameters
-In general, reasoning models don't support the following parameters you can find in chat completion models:
+In general, reasoning models don't support the following parameters that you can find in chat completion models: