Commit b9e1f3e

Merge pull request #451 from guygregory/patch-3
Fix grammar in about-reasoning.md (“models produces” → “models produce”) and propagate change to referencing articles
2 parents fc0b900 + 1598341 commit b9e1f3e

File tree

1 file changed (+2 −2 lines)


articles/ai-foundry/model-inference/includes/use-chat-reasoning/about-reasoning.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ author: santiagxf
 
 ## Reasoning models
 
-Reasoning models can reach higher levels of performance in domains like math, coding, science, strategy, and logistics. The way these models produces outputs is by explicitly using chain of thought to explore all possible paths before generating an answer. They verify their answers as they produce them which helps them to arrive to better more accurate conclusions. This means that reasoning models may require less context in prompting in order to produce effective results.
+Reasoning models can reach higher levels of performance in domains like math, coding, science, strategy, and logistics. The way these models produce outputs is by explicitly using chain of thought to explore all possible paths before generating an answer. They verify their answers as they produce them which helps them to arrive to better more accurate conclusions. This means that reasoning models may require less context in prompting in order to produce effective results.
 
 Such way of scaling model's performance is referred as *inference compute time* as it trades performance against higher latency and cost. It contrasts to other approaches that scale through *training compute time*.
 
@@ -19,4 +19,4 @@ Reasoning models then produce two types of outputs:
 > * Reasoning completions
 > * Output completions
 
-Both of these completions count towards content generated from the model and hence, towards the token limits and costs associated with the model. Some models may output the reasoning content, like `DeepSeek-R1`. Some others, like `o1`, only outputs the output piece of the completions.
+Both of these completions count towards content generated from the model and hence, towards the token limits and costs associated with the model. Some models may output the reasoning content, like `DeepSeek-R1`. Some others, like `o1`, only outputs the output piece of the completions.
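For context on the passage this commit edits, here is a minimal sketch of the behavior it describes: a reasoning model returns a reasoning part and an output part, and both count toward token usage and cost. The sketch is not part of the commit; it assumes the azure-ai-inference Python SDK, a `DeepSeek-R1` deployment behind an Azure AI model inference endpoint, and illustrative environment variable names (`AZURE_INFERENCE_ENDPOINT`, `AZURE_INFERENCE_CREDENTIAL`).

```python
import os
import re

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Client for the model inference endpoint (endpoint/credential names are illustrative).
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

response = client.complete(
    model="DeepSeek-R1",
    messages=[UserMessage(content="How many languages are in the world?")],
)

content = response.choices[0].message.content

# DeepSeek-R1 interleaves its chain of thought inside <think>...</think>;
# models such as o1 return only the final answer, so the match can be empty.
match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)
reasoning = match.group(1).strip() if match else ""
answer = match.group(2).strip() if match else content

# Usage covers both the reasoning completion and the output completion,
# even when the reasoning content itself is not returned.
print("prompt:", response.usage.prompt_tokens,
      "completion:", response.usage.completion_tokens,
      "total:", response.usage.total_tokens)
print(answer)
```

With models that hide the chain of thought, the regex simply never matches and the whole content is treated as the answer, while the reported completion tokens still include the hidden reasoning.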
