---
title: "Tutorial: Getting started with DeepSeek-R1 reasoning model in Azure AI Foundry Models"
titleSuffix: Azure AI Foundry
description: Learn how to deploy and use DeepSeek-R1 reasoning model in Azure AI Foundry Models with step-by-step guidance and code examples.
ms.service: azure-ai-foundry
ms.subservice: azure-ai-foundry-model-inference
ms.topic: tutorial
ms.date: 09/25/2025
ms.author: mopeakande
author: msakande
#CustomerIntent: As a developer or data scientist, I want to learn how to deploy and use the DeepSeek-R1 reasoning model in Azure AI Foundry Models so that I can build applications that leverage advanced reasoning capabilities for complex problem-solving tasks.
---
# Tutorial: Get started with DeepSeek-R1 reasoning model in Azure AI Foundry Models

In this tutorial, you learn how to deploy and use a DeepSeek reasoning model in Azure AI Foundry. This tutorial uses [DeepSeek-R1](https://ai.azure.com/explore/models/deepseek-r1/version/1/registry/azureml-deepseek?cid=learnDocs) for illustration. However, the content also applies to the newer [DeepSeek-R1-0528](https://ai.azure.com/explore/models/deepseek-r1-0528/version/1/registry/azureml-deepseek?cid=learnDocs) reasoning model.

The steps you perform in this tutorial are:

* Create and configure the Azure resources to use DeepSeek-R1 in Azure AI Foundry Models.
* Configure the model deployment.
* Use DeepSeek-R1 with the Azure AI Inference SDK or REST APIs.
* Use DeepSeek-R1 with other SDKs.

## Prerequisites

To complete this article, you need:

- An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Azure AI Foundry Models](../../model-inference/how-to/quickstart-github-models.md), if that applies to you.
Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI hubs and projects in Azure AI Foundry to create intelligent applications if needed.

To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources by using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code, with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).

1. Sign in to [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).

1. On the landing page, go to the "Explore models and capabilities" section and select **Go to full model catalog** to open the model catalog.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png" alt-text="A screenshot of the homepage of the Foundry portal showing the model catalog section." lightbox="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png":::
   > A new window shows up with the full list of models. Select **DeepSeek-R1** from the list and select **Deploy**. The wizard asks to create a new project.

1. Select the dropdown in the "Advanced options" section of the wizard to see details about settings and other defaults created alongside the project. These defaults are selected for optimal functionality and include:

    | Property | Description |
    | -------------- | ----------- |
    | Resource group | The main container for all the resources in Azure. This container helps you organize resources that work together. It also helps to have a scope for the costs associated with the entire project. |
    | Region | The region of the resources that you're creating. |
    | Azure AI Foundry resource | The resource enabling access to the flagship models in Azure AI model catalog. In this tutorial, a new account is created, but Azure AI Foundry resources (formerly known as Azure AI Services resource) can be shared across multiple hubs and projects. Hubs use a connection to the resource to have access to the model deployments available there. To learn how you can create connections to Azure AI Foundry resources to consume models, see [Connect your AI project](../../model-inference/how-to/configure-project-connection.md). |

1. Select **Create** to create the Foundry project alongside the other defaults. Wait until the project creation is complete. This process takes a few minutes.
## Deploy the model

1. When you create the project and resources, a deployment wizard appears. DeepSeek-R1 is available as a Foundry Model sold directly by Azure. You can review the pricing details for the model by selecting the DeepSeek tab on the [Azure AI Foundry Models pricing page](https://azure.microsoft.com/pricing/details/phi-3/).

1. Configure the deployment settings. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. This setup allows you to configure specific names for your models when you attach specific configurations.

1. Azure AI Foundry automatically selects the Foundry resource you created earlier with your project. Use the **Customize** option to change the connection based on your needs. DeepSeek-R1 is currently offered under the **Global Standard** deployment type, which offers higher throughput and performance.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/deployment-wizard.png" alt-text="Screenshot showing how to deploy the model." lightbox="../media/quickstart-get-started-deepseek-r1/deployment-wizard.png":::

1. Select **Deploy**.

1. When the deployment completes, the deployment **Details** page opens. Now the new model is ready for use.
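To make the role of the deployment name concrete, here's a sketch of a chat completions request body. The `model` field carries the deployment name, not the underlying model ID; `DeepSeek-R1` is the default name from the wizard and would change if you customized it:

```python
# The "model" field carries the deployment name, so requests route to the
# specific deployment you created rather than a generic model ID.
payload = {
    "model": "DeepSeek-R1",  # default deployment name from the wizard
    "messages": [
        {"role": "user", "content": "How many languages are in the world?"},
    ],
    "max_tokens": 2048,
}
```

If you deploy the same model twice under different names and configurations, requests are disambiguated purely by this value.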
## Use the model in the playground

You can get started by using the model in the playground to get an idea of the model's capabilities.

1. On the deployment details page, select **Open in playground** in the top bar. This action opens the chat playground.

1. In the **Deployment** drop down of the chat playground, the deployment you created is already automatically selected.

1. Configure the system prompt as needed. In general, reasoning models don't use system messages in the same way as other types of models.

    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png" alt-text="Screenshot showing how to select a model deployment to use in playground, configure the system message, and test it out." lightbox="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png":::

1. Type your prompt and see the outputs.

1. Use **View code** to see details about how to access the model deployment programmatically.
Use the Foundry Models endpoint and credentials to connect to the model:

:::image type="content" source="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png" alt-text="Screenshot showing how to get the URL and key associated with the deployment." lightbox="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png":::

Use the Azure AI Model Inference package to consume the model in your code:
Reasoning might generate longer responses and consume a larger number of tokens. You can see the [rate limits](../../model-inference/quotas-limits.md) that apply to DeepSeek-R1 models. Consider having a retry strategy to handle rate limits. You can also [request increases to the default limits](../quotas-limits.md#request-increases-to-the-default-limits).
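One shape such a retry strategy can take is exponential backoff on HTTP 429 (too many requests) responses. A minimal sketch, with `send_request` standing in for whatever call your client makes:

```python
import random
import time


def call_with_retries(send_request, max_retries=5, base_delay=1.0):
    """Retry `send_request` while it reports HTTP 429, backing off exponentially."""
    for attempt in range(max_retries):
        status, body = send_request()
        if status != 429:
            return status, body
        # Exponential backoff with jitter: base, 2x base, 4x base, ...
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    return status, body
```

In practice, prefer honoring a `Retry-After` header when the service returns one instead of a fixed schedule.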
### Reasoning content

Some reasoning models, like DeepSeek-R1, generate completions and include the reasoning behind them. The reasoning associated with the completion is included in the response's content within the tags `<think>` and `</think>`. The model might select the scenarios for which to generate reasoning content. The following example shows how to generate the reasoning content, using Python:
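A sketch of the parsing side, assuming `content` holds the completion string returned by the model (the sample text here is made up for illustration):

```python
import re

# Sample completion text in the shape DeepSeek-R1 returns: reasoning inside
# <think> tags, followed by the final answer. This string is illustrative only.
content = (
    "<think>The user asks a factual question; most estimates put the number "
    "of living languages at roughly 7,000.</think>"
    "There are about 7,000 languages spoken in the world today."
)

# Split the reasoning from the answer; not every completion includes a
# <think> block, so fall back to treating the whole content as the answer.
match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)
if match:
    reasoning, answer = match.group(1), match.group(2)
else:
    reasoning, answer = None, content

print("Reasoning:", reasoning)
print("Answer:", answer)
```

The same pattern applies when streaming, except that you watch for the closing `</think>` tag as chunks arrive.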