
Commit f3e1b92

Update get-started-deepseek-r1.md
1 parent 60f56b2 commit f3e1b92

File tree

1 file changed: +3 -3 lines changed


articles/ai-foundry/model-inference/tutorials/get-started-deepseek-r1.md

Lines changed: 3 additions & 3 deletions
@@ -42,11 +42,11 @@ To create an Azure AI project that supports model inference for DeepSeek-R1, fol
 2. On the landing page, select **Create project**.

    > [!TIP]
-   > **Are you using Azure OpenAI service?** When you are connected to Azure AI Foundry portal using an Azure OpenAI service resource, only Azure OpenAI models show up in the catalog. To view the full list of models, including DeepSeek-R1, use the top **Annoucements** section and locate the card with the option **Explore more models**.
+   > **Are you using Azure OpenAI service?** When you are connected to Azure AI Foundry portal using an Azure OpenAI service resource, only Azure OpenAI models show up in the catalog. To view the full list of models, including DeepSeek-R1, use the top **Announcements** section and locate the card with the option **Explore more models**.
    >
    > :::image type="content" source="../media/quickstart-get-started-deepseek-r1/explore-more-models.png" alt-text="Screenshot showing the card with the option to explore all the models from the catalog." lightbox="../media/quickstart-get-started-deepseek-r1/explore-more-models.png":::
    >
-   > A new windows shows up with the full list of models. Select **DeepSeek-R1** from the list and select **Deploy**. The wizard asks to create a new project.
+   > A new window shows up with the full list of models. Select **DeepSeek-R1** from the list and select **Deploy**. The wizard asks to create a new project.

 3. Give the project a name, for example "my-project".

@@ -139,7 +139,7 @@ You can use the Azure AI Inference package to consume the model in code:

 [!INCLUDE [code-chat-reasoning](../includes/code-create-chat-reasoning.md)]

-Reasoning may generate longer responses and consume a larger amount of tokens. You can see the [rate limits](../quotas-limits.md) that apply to DeepSeek-R1 models. Consider having a retry strategy to handle rate limits being applied. You can also [request increases to the default limits](../quotas-limits.md#request-increases-to-the-default-limits).
+Reasoning may generate longer responses and consume a larger number of tokens. You can see the [rate limits](../quotas-limits.md) that apply to DeepSeek-R1 models. Consider having a retry strategy to handle rate limits being applied. You can also [request increases to the default limits](../quotas-limits.md#request-increases-to-the-default-limits).

 ### Reasoning content
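The retry strategy that the changed paragraph recommends could be sketched as an exponential-backoff wrapper like the one below. This is a minimal illustration, not the article's included sample: `RateLimitError` is a hypothetical stand-in for whatever exception your inference client raises on an HTTP 429 response, and `complete_with_retry` assumes that exception exposes a `status_code` attribute.

```python
import random
import time


class RateLimitError(Exception):
    """Hypothetical stand-in for an HTTP 429 error raised by an inference client."""
    status_code = 429


def complete_with_retry(call, max_retries=5, base_delay=1.0):
    """Invoke call() and retry with exponential backoff when it is rate limited.

    call: a zero-argument callable, e.g. lambda: client.complete(messages=...).
    """
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as err:
            # Only retry throttling responses (HTTP 429); re-raise anything else,
            # and give up once the retry budget is exhausted.
            if getattr(err, "status_code", None) != 429 or attempt == max_retries:
                raise
            # Exponential backoff with jitter: delays grow as 1x, 2x, 4x, ...
            # of base_delay, each multiplied by a random factor in [1, 2).
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

In practice you would wrap the actual completion call, for example `complete_with_retry(lambda: client.complete(messages=messages))`, keeping `max_retries` small enough that reasoning requests fail fast when the quota is genuinely exhausted.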
