
Commit c2a1018 (commit message: "edit")
1 parent: 8b792ef

3 files changed: 10 additions, 10 deletions

articles/ai-foundry/foundry-models/includes/use-chat-reasoning/about-reasoning.md

Lines changed: 1 addition & 1 deletion

@@ -18,4 +18,4 @@ Reasoning models produce two types of content as outputs:
* Reasoning completions
* Output completions

-Both of these completions count towards content generated from the model. Therefore, they contribute to the token limits and costs associated with the model. Some models, like `DeepSeek-R1`, might respond with the reasoning content. Others, like `o1`, only output the completions.
+Both of these completions count towards content generated from the model. Therefore, they contribute to the token limits and costs associated with the model. Some models, like `DeepSeek-R1`, might respond with the reasoning content. Others, like `o1`, output only the completions.
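
Because reasoning and output completions are both billed as generated tokens, it can help to check the usage numbers on each response. The following is a minimal sketch, assuming the `azure-ai-inference` Python package and a DeepSeek-R1 deployment; the environment variable names and deployment name are placeholders, not values from this commit:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key; substitute your own values.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    model="DeepSeek-R1",
    messages=[UserMessage(content="How many languages are in the world?")],
)

# For reasoning models, completion tokens include both the reasoning content
# and the final answer, so both count toward token limits and cost.
print("Prompt tokens:    ", response.usage.prompt_tokens)
print("Completion tokens:", response.usage.completion_tokens)
print("Total tokens:     ", response.usage.total_tokens)
```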

articles/ai-foundry/foundry-models/includes/use-chat-reasoning/best-practices.md

Lines changed: 4 additions & 4 deletions

@@ -14,8 +14,8 @@ When building prompts for reasoning models, take the following into consideration:
> [!div class="checklist"]
> * Use simple instructions and avoid using chain-of-thought techniques.
> * Built-in reasoning capabilities make simple zero-shot prompts as effective as more complex methods.
-> * When providing additional context or documents, like in RAG scenarios, including only the most relevant information may help preventing the model from over-complicating its response.
-> * Reasoning models may support the use of system messages. However, they may not follow them as strictly as other non-reasoning models.
-> * When creating multi-turn applications, consider only appending the final answer from the model, without it's reasoning content as explained at [Reasoning content](#reasoning-content) section.
+> * When providing additional context or documents, like in RAG scenarios, including only the most relevant information might help prevent the model from over-complicating its response.
+> * Reasoning models may support the use of system messages. However, they might not follow them as strictly as other non-reasoning models.
+> * When creating multi-turn applications, consider appending only the final answer from the model, without its reasoning content, as explained in the [Reasoning content](#reasoning-content) section.

-Notice that reasoning models can take longer times to generate responses. They use long reasoning chains of thought that enabled deeper and more structured problem-solving. They also perform self-verification to cross-check its own answers and correct its own mistakes, showcasing emergent self-reflective behaviors.
+Notice that reasoning models can take longer to generate responses. They use long reasoning chains of thought that enable deeper and more structured problem-solving. They also perform self-verification to cross-check their answers and correct their mistakes, thereby showcasing emergent self-reflective behaviors.

articles/ai-foundry/foundry-models/tutorials/get-started-deepseek-r1.md

Lines changed: 5 additions & 5 deletions

@@ -33,7 +33,7 @@ To complete this article, you need:
## Create the resources


-Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI Hubs and Projects in Azure AI Foundry to create intelligent applications if needed.
+Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to create intelligent applications if needed.

To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources, using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code, with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).

@@ -93,9 +93,9 @@ To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps.

You can get started by using the model in the playground to have an idea of the model's capabilities.

-1. On the deployment details page, select **Open in playground** in the top bar.
+1. On the deployment details page, select **Open in playground** in the top bar. This action opens the chat playground.

-2. In the **Deployment** drop down, the deployment you created is already automatically selected.
+2. In the **Deployment** drop down of the chat playground, the deployment you created is already automatically selected.

3. Configure the system prompt as needed. In general, reasoning models don't use system messages in the same way as other types of models.

@@ -124,7 +124,7 @@ Reasoning might generate longer responses and consume a larger number of tokens.

### Reasoning content

-Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind it. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select which scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content, using Python:
+Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind it. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select the scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content, using Python:

```python
import re
```
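
The hunk ends inside the code block, so the rest of the article's example isn't visible in this diff. As a rough, self-contained sketch of the pattern the paragraph describes (not necessarily the article's exact code), the reasoning can be separated from the final answer with a regular expression over the `<think>` tags:

```python
import re

# Example response text from a reasoning model such as DeepSeek-R1; the
# reasoning is wrapped in <think> tags and followed by the final answer.
content = (
    "<think>The user asks for a count of world languages. Estimates vary, "
    "so I should quote the commonly cited figure.</think>"
    "There are roughly 7,000 living languages in the world today."
)

match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)

if match:
    reasoning = match.group(1).strip()
    answer = match.group(2).strip()
    print("Reasoning:", reasoning)
    print("Answer:", answer)
else:
    # The model might not emit reasoning content for every request.
    print("Answer:", content.strip())
```

In a multi-turn application, appending only `answer` back into the message history, rather than the full `content`, follows the best-practice checklist from the previous file.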

@@ -158,7 +158,7 @@ Usage:

### Parameters

-In general, reasoning models don't support the following parameters you can find in chat completion models:
+In general, reasoning models don't support the following parameters that you can find in chat completion models:

* Temperature
* Presence penalty
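
Practically, this means requests to reasoning models should leave those sampling parameters out. A minimal sketch with the `azure-ai-inference` Python package; the endpoint, key, and deployment name are placeholders, and exact parameter support varies by model:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

# Omit temperature, presence_penalty, and similar sampling parameters;
# a plain request with messages (and optionally max_tokens) is enough.
response = client.complete(
    model="DeepSeek-R1",
    messages=[UserMessage(content="Summarize the main ideas of game theory.")],
    max_tokens=2048,
)

print(response.choices[0].message.content)
```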
