Commit b72857e: Merge pull request #7207 from msakande/freshness-deepseek-tuto ("freshness for deepseek tutorial")
Parents: 83f4d7d + 68db964

File tree: 3 files changed, +27 −34 lines. Two binary image files also changed (−253 KB and −54.6 KB; not shown).

articles/ai-foundry/foundry-models/tutorials/get-started-deepseek-r1.md

Lines changed: 27 additions & 34 deletions
@@ -1,32 +1,33 @@
---
title: "Tutorial: Getting started with DeepSeek-R1 reasoning model in Azure AI Foundry Models"
titleSuffix: Azure AI Foundry
description: Learn how to deploy and use DeepSeek-R1 reasoning model in Azure AI Foundry Models with step-by-step guidance and code examples.
ms.service: azure-ai-foundry
ms.subservice: azure-ai-foundry-model-inference
ms.topic: tutorial
ms.date: 09/25/2025
ms.author: mopeakande
author: msakande
#CustomerIntent: As a developer or data scientist, I want to learn how to deploy and use the DeepSeek-R1 reasoning model in Azure AI Foundry Models so that I can build applications that leverage advanced reasoning capabilities for complex problem-solving tasks.
---

# Tutorial: Get started with DeepSeek-R1 reasoning model in Azure AI Foundry Models

In this tutorial, you learn how to deploy and use a DeepSeek reasoning model in Azure AI Foundry. This tutorial uses [DeepSeek-R1](https://ai.azure.com/explore/models/deepseek-r1/version/1/registry/azureml-deepseek?cid=learnDocs) for illustration. However, the content also applies to the newer [DeepSeek-R1-0528](https://ai.azure.com/explore/models/deepseek-r1-0528/version/1/registry/azureml-deepseek?cid=learnDocs) reasoning model.

The steps you perform in this tutorial are:

* Create and configure the Azure resources to use DeepSeek-R1 in Azure AI Foundry Models.
* Configure the model deployment.
* Use DeepSeek-R1 with the Azure AI Inference SDK or REST APIs.
* Use DeepSeek-R1 with other SDKs.

## Prerequisites

To complete this article, you need:

- An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Azure AI Foundry Models](../../model-inference/how-to/quickstart-github-models.md), if that applies to you.

[!INCLUDE [about-reasoning](../../foundry-models/includes/use-chat-reasoning/about-reasoning.md)]

@@ -35,17 +36,13 @@ To complete this article, you need:

Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to create intelligent applications if needed.

To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources by using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code, with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).

[!INCLUDE [tip-left-pane](../../includes/tip-left-pane.md)]

1. Sign in to [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).

1. On the landing page, go to the "Explore models and capabilities" section and select **Go to full model catalog** to open the model catalog.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png" alt-text="A screenshot of the homepage of the Foundry portal showing the model catalog section." lightbox="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png":::

@@ -60,50 +57,46 @@ To create an Azure AI project that supports deployment for DeepSeek-R1, follow t
>
> A new window appears with the full list of models. Select **DeepSeek-R1** from the list and select **Deploy**. The wizard asks to create a new project.

1. Select the dropdown in the "Advanced options" section of the wizard to see details about settings and other defaults created alongside the project. These defaults are selected for optimal functionality and include:

    | Property | Description |
    | -------------- | ----------- |
    | Resource group | The main container for all the resources in Azure. This container helps you organize resources that work together. It also helps to have a scope for the costs associated with the entire project. |
    | Region | The region of the resources that you're creating. |
    | Azure AI Foundry resource | The resource enabling access to the flagship models in Azure AI model catalog. In this tutorial, a new account is created, but Azure AI Foundry resources (formerly known as Azure AI Services resource) can be shared across multiple hubs and projects. Hubs use a connection to the resource to have access to the model deployments available there. To learn how you can create connections to Azure AI Foundry resources to consume models, see [Connect your AI project](../../model-inference/how-to/configure-project-connection.md). |

1. Select **Create** to create the Foundry project alongside the other defaults. Wait until the project creation is complete. This process takes a few minutes.

## Deploy the model

1. When you create the project and resources, a deployment wizard appears. DeepSeek-R1 is available as a Foundry Model sold directly by Azure. You can review the pricing details for the model by selecting the DeepSeek tab on the [Azure AI Foundry Models pricing page](https://azure.microsoft.com/pricing/details/phi-3/).

1. Configure the deployment settings. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. This setup allows you to configure specific names for your models when you attach specific configurations.

1. Azure AI Foundry automatically selects the Foundry resource you created earlier with your project. Use the **Customize** option to change the connection based on your needs. DeepSeek-R1 is currently offered under the **Global Standard** deployment type, which offers higher throughput and performance.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/deployment-wizard.png" alt-text="Screenshot showing how to deploy the model." lightbox="../media/quickstart-get-started-deepseek-r1/deployment-wizard.png":::

1. Select **Deploy**.

1. When the deployment completes, the deployment **Details** page opens. Now the new model is ready for use.

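As a concrete illustration of how the deployment name routes requests, the following sketch builds the JSON body of a chat completions call with only the standard library. The endpoint URL shape and prompt are placeholders, not values from this tutorial; the point is that the `model` field carries the deployment name you chose in the wizard.

```python
import json

# Placeholder endpoint; substitute your own resource's inference endpoint.
endpoint = "https://<resource-name>.services.ai.azure.com/models/chat/completions"

def build_request_body(deployment_name: str, prompt: str) -> str:
    """Build a chat completions payload; `model` carries the deployment name."""
    payload = {
        "model": deployment_name,  # routes the request to this deployment
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

body = build_request_body("DeepSeek-R1", "How many languages are in the world?")
print(body)
```

If you gave the deployment a custom name, that name, not the underlying model name, is what goes in `model`.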
## Use the model in the playground

You can get started by using the model in the playground to get an idea of the model's capabilities.

1. On the deployment details page, select **Open in playground** in the top bar. This action opens the chat playground.

1. In the **Deployment** dropdown of the chat playground, the deployment you created is automatically selected.

1. Configure the system prompt as needed. In general, reasoning models don't use system messages in the same way as other types of models.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png" alt-text="Screenshot showing how to select a model deployment to use in playground, configure the system message, and test it out." lightbox="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png":::

1. Type your prompt and see the outputs.

1. Use **View code** to see details about how to access the model deployment programmatically.
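Because reasoning models don't treat system messages the way other chat models do, one common client-side pattern is to fold system-style instructions into the first user message instead. This is an illustrative workaround, not an official API requirement; the helper name and prompts below are hypothetical.

```python
def to_reasoning_messages(instructions: str, user_prompt: str) -> list[dict]:
    """Fold system-style instructions into the user turn for a reasoning model."""
    return [{"role": "user", "content": f"{instructions}\n\n{user_prompt}"}]

messages = to_reasoning_messages(
    "Answer concisely and show your reasoning.",
    "How many languages are in the world?",
)
print(messages)
```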

[!INCLUDE [best-practices](../../foundry-models/includes/use-chat-reasoning/best-practices.md)]

@@ -114,17 +107,17 @@ Use the Foundry Models endpoint and credentials to connect to the model:
:::image type="content" source="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png" alt-text="Screenshot showing how to get the URL and key associated with the deployment." lightbox="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png":::

Use the Azure AI Model Inference package to consume the model in your code:

[!INCLUDE [code-create-chat-client](../../foundry-models/includes/code-create-chat-client.md)]

[!INCLUDE [code-chat-reasoning](../../foundry-models/includes/code-create-chat-reasoning.md)]

Reasoning might generate longer responses and consume a larger number of tokens. You can see the [rate limits](../../model-inference/quotas-limits.md) that apply to DeepSeek-R1 models. Consider having a retry strategy to handle rate limits. You can also [request increases to the default limits](../quotas-limits.md#request-increases-to-the-default-limits).
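A retry strategy can be as simple as exponential backoff with jitter around the call that hits the endpoint. The sketch below is generic client-side logic, not part of the tutorial's SDK samples; `RateLimitError` is a placeholder for whatever exception your HTTP client or SDK raises on a 429 response.

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the 429 rate-limit error your client raises."""

def with_retries(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Run `call`, retrying on rate-limit errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Backoff grows as base_delay, 2x, 4x, ... plus a little jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In practice, also honor a `Retry-After` header when the service returns one, rather than relying on backoff alone.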

### Reasoning content

Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select the scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content using Python:

```python
import re
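# NOTE: the rest of this example is a hedged reconstruction; the original
# snippet is truncated after `import re`. In the tutorial, the content string
# comes from the chat response produced by the SDK samples above
# (response.choices[0].message.content); a sample string stands in here.

def split_reasoning(content: str):
    """Separate the <think>...</think> reasoning from the final answer."""
    match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return None, content  # the model produced no reasoning content

sample = "<think>Counting the items directly works.</think>There are 4 items."
thinking, answer = split_reasoning(sample)
print("Thinking:", thinking)
print("Answer:", answer)
```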
