Commit 198275e

initial review of deep-seek-r1 tutorial
1 parent c6fc98c commit 198275e

File tree

8 files changed (+46, -72 lines)


articles/ai-foundry/foundry-models/includes/use-chat-reasoning/about-reasoning.md

Lines changed: 6 additions & 7 deletions
@@ -9,14 +9,13 @@ author: santiagxf

## Reasoning models

-Reasoning models can reach higher levels of performance in domains like math, coding, science, strategy, and logistics. The way these models produce outputs is by explicitly using chain of thought to explore all possible paths before generating an answer. They verify their answers as they produce them which helps them to arrive to better more accurate conclusions. This means that reasoning models may require less context in prompting in order to produce effective results.
+Reasoning models can reach higher levels of performance in domains like math, coding, science, strategy, and logistics. The way these models produce outputs is by explicitly using chain of thought to explore all possible paths before generating an answer. They verify their answers as they produce them, which helps to arrive at better, more accurate conclusions. As a result, reasoning models might require less context in prompting in order to produce effective results.

-Such way of scaling model's performance is referred as *inference compute time* as it trades performance against higher latency and cost. It contrasts to other approaches that scale through *training compute time*.
+This way of scaling a model's performance is referred to as *inference compute time* as it trades performance against higher latency and cost. In contrast, other approaches might scale through *training compute time*.

-Reasoning models then produce two types of outputs:
+Reasoning models produce two types of content as outputs:

-> [!div class="checklist"]
-> * Reasoning completions
-> * Output completions
+* Reasoning completions
+* Output completions

-Both of these completions count towards content generated from the model and hence, towards the token limits and costs associated with the model. Some models may output the reasoning content, like `DeepSeek-R1`. Some others, like `o1`, only outputs the output piece of the completions.
+Both of these completions count towards content generated from the model. Therefore, they contribute to the token limits and costs associated with the model. Some models, like `DeepSeek-R1`, might respond with the reasoning content. Others, like `o1`, respond only with the output completions.
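The reasoning-versus-output split described above can be illustrated with a short Python helper. This is a minimal sketch under the assumption that DeepSeek-R1 wraps its chain of thought in `<think>...</think>` tags before the final answer; the function name `split_reasoning` is ours, not part of any SDK.

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a DeepSeek-R1 style completion into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think> tags.
    Models like o1 that return only the final answer yield an empty
    reasoning part. Both parts count toward token limits and costs.
    """
    match = re.match(r"\s*<think>(.*?)</think>(.*)", completion, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", completion.strip()
```

Note that because both parts are billed, trimming or discarding the reasoning text in your application does not reduce the token usage reported by the service.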
Image files changed in this commit: 62.9 KB, 358 KB, -71.1 KB, -1.22 MB.

articles/ai-foundry/foundry-models/tutorials/get-started-deepseek-r1.md

Lines changed: 40 additions & 65 deletions
@@ -5,7 +5,7 @@ description: Learn about the reasoning capabilities of DeepSeek-R1 in Azure AI F
manager: scottpolly
ms.service: azure-ai-model-inference
ms.topic: tutorial
-ms.date: 05/19/2025
+ms.date: 06/26/2025
ms.reviewer: fasantia
ms.author: mopeakande
author: msakande
@@ -15,130 +15,105 @@ author: msakande

In this tutorial, you learn:

-> [!div class="checklist"]
-> * How to create and configure the Azure resources to use DeepSeek-R1 model in Foundry Models.
-> * How to configure the model deployment.
-> * How to use DeepSeek-R1 using the Azure AI Inference SDK or REST APIs.
-> * How to use DeepSeek-R1 using other SDKs.
+* How to create and configure the Azure resources to use DeepSeek-R1 in Azure AI Foundry Models.
+* How to configure the model deployment.
+* How to use DeepSeek-R1 with the Azure AI Inference SDK or REST APIs.
+* How to use DeepSeek-R1 with other SDKs.

## Prerequisites

To complete this article, you need:

-* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Azure AI Foundry Models](../../model-inference/how-to/quickstart-github-models.md) if that's your case.
+* An Azure subscription. If you're using [GitHub Models](https://docs.github.com/en/github-models/), you can upgrade your experience and create an Azure subscription in the process. Read [Upgrade from GitHub Models to Azure AI Foundry Models](../../model-inference/how-to/quickstart-github-models.md), if that applies to you.

[!INCLUDE [about-reasoning](../../foundry-models/includes/use-chat-reasoning/about-reasoning.md)]

## Create the resources

-Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI Hubs and Projects in Azure AI Foundry to create intelligent applications if needed. The following picture shows the high level architecture.
+Foundry Models is a capability in Azure AI Foundry resources in Azure. You can create model deployments under the resource to consume their predictions. You can also connect the resource to Azure AI Hubs and Projects in Azure AI Foundry to create intelligent applications if needed.

-:::image type="content" source="../media/quickstart-get-started-deepseek-r1/resources-architecture.png" alt-text="A diagram showing the high level architecture of the resources created in the tutorial." lightbox="../media/quickstart-get-started-deepseek-r1/resources-architecture.png":::
-
-To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).
+To create an Azure AI project that supports deployment for DeepSeek-R1, follow these steps. You can also create the resources, using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code, with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).

-1. Go to [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs) and log in with your account.
+1. Sign in to [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).

-2. On the landing page, select **Create project**.
+1. Go to the preview features icon on the header of the landing page and make sure that the **Deploy models to Azure AI Foundry resources** feature is turned on.
+
+    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/enable-foundry-resource-deployment.png" alt-text="A screenshot showing the steps to enable deployment to a Foundry resource." lightbox="../media/quickstart-get-started-deepseek-r1/enable-foundry-resource-deployment.png":::
+
+1. On the landing page, go to the "Explore models and capabilities" section and select **Go to full model catalog** to open the model catalog.
+
+    :::image type="content" source="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png" alt-text="A screenshot of the homepage of the Foundry portal showing the model catalog section." lightbox="../media/quickstart-get-started-deepseek-r1/foundry-homepage-model-catalog-section.png":::
+
+1. Search for the **DeepSeek-R1** model and open its model card.
+
+1. Select **Use this model**. This opens up a wizard to create an Azure AI Foundry project and resources that you'll work in. You can keep the default name for the project or change it.

    > [!TIP]
-   > **Are you using Azure OpenAI in Azure AI Foundry Models?** When you are connected to Azure AI Foundry portal using an Azure OpenAI resource, only Azure OpenAI models show up in the catalog. To view the full list of models, including DeepSeek-R1, use the top **Announcements** section and locate the card with the option **Explore more models**.
+   > **Are you using Azure OpenAI in Azure AI Foundry Models?** When you're connected to Azure AI Foundry portal using an Azure OpenAI resource, only Azure OpenAI models show up in the catalog. To view the full list of models, including DeepSeek-R1, use the top **Announcements** section and locate the card with the option **Explore more models**.
    >
    > :::image type="content" source="../media/quickstart-get-started-deepseek-r1/explore-more-models.png" alt-text="Screenshot showing the card with the option to explore all the models from the catalog." lightbox="../media/quickstart-get-started-deepseek-r1/explore-more-models.png":::
    >
    > A new window shows up with the full list of models. Select **DeepSeek-R1** from the list and select **Deploy**. The wizard asks to create a new project.

-3. Give the project a name, for example "my-project".
-
-4. In this tutorial, we create a brand new project under a new AI hub, hence, select **Create new hub**. Hubs are containers for multiple projects and allow you to share resources across all the projects.
-
-5. Give the hub a name, for example "my-hub" and select **Next**.
-
-6. The wizard updates with details about the resources that are going to be created. Select **Azure resources to be created** to see the details.
-
-   :::image type="content" source="../media/create-resources/create-project-with-hub-details.png" alt-text="Screenshot showing the details of the project and hub to be created." lightbox="../media/create-resources/create-project-with-hub-details.png":::
-
-7. You can see that the following resources are created:
+1. Select the dropdown in the "Advanced options" section of the wizard to see the details of other defaults created alongside the project. These defaults are selected for optimal functionality and include:

    | Property | Description |
    | -------------- | ----------- |
    | Resource group | The main container for all the resources in Azure. This helps get resources that work together organized. It also helps to have a scope for the costs associated with the entire project. |
-   | Location | The region of the resources that you're creating. |
-   | Hub | The main container for AI projects in Azure AI Foundry. Hubs promote collaboration and allow you to store information for your projects. |
-   | AI Foundry | The resource enabling access to the flagship models in Azure AI model catalog. In this tutorial, a new account is created, but Azure AI Foundry resources (formerly known Azure AI Services) can be shared across multiple hubs and projects. Hubs use a connection to the resource to have access to the model deployments available there. To learn how you can create connections to Azure AI Foundry resources to consume models you can read [Connect your AI project](../../model-inference/how-to/configure-project-connection.md). |
-
-8. Select **Create**. The resources creation process starts.
-
-9. Once completed, your project is ready to be configured.
-
-10. Foundry Models is a Preview feature that needs to be turned on in Azure AI Foundry. At the top navigation bar, over the right corner, select the **Preview features** icon. A contextual blade shows up at the right of the screen.
+   | Region | The region of the resources that you're creating. |
+   | AI Foundry resource | The resource enabling access to the flagship models in Azure AI model catalog. In this tutorial, a new account is created, but Azure AI Foundry resources (formerly known as Azure AI Services) can be shared across multiple hubs and projects. Hubs use a connection to the resource to have access to the model deployments available there. To learn how you can create connections to Azure AI Foundry resources to consume models you can read [Connect your AI project](../../model-inference/how-to/configure-project-connection.md). |

-11. Turn on the **Deploy models to Azure AI model inference service** feature.
+1. Select **Create** to create the Foundry project alongside the other defaults. Wait until the project creation is complete. This process takes a few minutes.

-   :::image type="content" source="../media/quickstart-ai-project/ai-project-inference-endpoint.gif" alt-text="An animation showing how to turn on the Azure AI model inference service deploy models feature in Azure AI Foundry portal." lightbox="../media/quickstart-ai-project/ai-project-inference-endpoint.gif":::
+## Deploy the model

-12. Close the panel.
-
-## Add DeepSeek-R1 model deployment
-
-Let's now create a new model deployment for DeepSeek-R1:
-
-1. Go to **Model catalog** section in [Azure AI Foundry portal](https://ai.azure.com/explore/models) and find the model [DeepSeek-R1](https://ai.azure.com/explore/models/DeepSeek-R1/version/1/registry/azureml-deepseek) model.
-
-3. You can review the details of the model in the model card.
-
-4. Select **Deploy**.
-
-5. The wizard shows the model's terms and conditions for DeepSeek-R1, which is offered as a Microsoft first party consumption service. You can review our privacy and security commitments under [Data, privacy, and Security](../../../ai-studio/how-to/concept-data-privacy.md).
+1. Once the project and resources are created, a deployment wizard appears. DeepSeek-R1 is offered as a Microsoft first party consumption service. You can review our privacy and security commitments under [Data, privacy, and Security](../../../ai-studio/how-to/concept-data-privacy.md).

    > [!TIP]
-   > Review the pricing details for the model by selecting [Pricing and terms](https://aka.ms/DeepSeekPricing).
+   > Review the pricing details for the model by selecting the [Pricing and terms tab](https://aka.ms/DeepSeekPricing).

-6. Accept the terms on those cases by selecting **Subscribe and deploy**.
+1. Select **Agree and Proceed** to continue with the deployment.

-   :::image type="content" source="../media/quickstart-get-started-deepseek-r1/models-deploy-agree.png" alt-text="Screenshot showing how to agree the terms and conditions of a DeepSeek-R1 model." lightbox="../media/quickstart-get-started-deepseek-r1/models-deploy-agree.png":::
+1. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. This allows you to also configure specific names for your models when you attach specific configurations.

-7. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for request to route to this particular model deployment. This allows you to also configure specific names for your models when you attach specific configurations.
+1. Azure AI Foundry automatically selects the Foundry resource created earlier with your project. Use the **Customize** option to change the connection based on your needs. DeepSeek-R1 is currently offered under the **Global Standard** deployment type which offers higher throughput and performance.

-8. We automatically select an Azure AI Services connection depending on your project. Use the **Customize** option to change the connection based on your needs. DeepSeek-R1 is currently offered under the **Global Standard** deployment type which offers higher throughput and performance.
+   :::image type="content" source="../media/quickstart-get-started-deepseek-r1/model-deploy.png" alt-text="Screenshot showing how to deploy the model." lightbox="../media/quickstart-get-started-deepseek-r1/model-deploy.png":::

-9. Select **Deploy**.
+1. Select **Deploy**.

-   :::image type="content" source="../media/quickstart-get-started-deepseek-r1/model-deploy.png" alt-text="Screenshot showing how to deploy the model." lightbox="../media/quickstart-get-started-deepseek-r1/model-deploy.png":::
+1. Once the deployment completes, the deployment **Details** page opens up. Now the new model is ready to be used.

-10. Once the deployment completes, the new model is listed in the page and it's ready to be used.

-## Use the model in playground
+## Use the model in the playground

-You can get started by using the model in the playground to have an idea of the model capabilities.
+You can get started by using the model in the playground to have an idea of the model's capabilities.

-1. On the deployment details page, select **Open in playground** option in the top bar.
+1. On the deployment details page, select **Open in playground** in the top bar.

-2. In the **Deployment** drop down, the deployment you created has been automatically selected.
+2. In the **Deployment** drop down, the deployment you created is already automatically selected.

3. Configure the system prompt as needed. In general, reasoning models don't use system messages in the same way that other types of models.

   :::image type="content" source="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png" alt-text="Screenshot showing how to select a model deployment to use in playground, configure the system message, and test it out." lightbox="../media/quickstart-get-started-deepseek-r1/playground-chat-models.png":::

4. Type your prompt and see the outputs.

-5. Additionally, you can use **View code** so see details about how to access the model deployment programmatically.
+5. Additionally, you can use **View code** to see details about how to access the model deployment programmatically.

[!INCLUDE [best-practices](../../foundry-models/includes/use-chat-reasoning/best-practices.md)]

## Use the model in code

Use the Foundry Models endpoint and credentials to connect to the model:

-:::image type="content" source="../media/overview/overview-endpoint-and-key.png" alt-text="Screenshot showing how to get the URL and key associated with the resource." lightbox="../media/overview/overview-endpoint-and-key.png":::
+:::image type="content" source="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png" alt-text="Screenshot showing how to get the URL and key associated with the deployment." lightbox="../media/quickstart-get-started-deepseek-r1/endpoint-target-and-key.png":::

-You can use the Azure AI Inference package to consume the model in code:
+You can use the Azure AI Model Inference package to consume the model in code:

[!INCLUDE [code-create-chat-client](../../foundry-models/includes/code-create-chat-client.md)]
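As a rough companion to the included code sample, the following Python sketch shows how the deployment name from the tutorial ends up in the `model` parameter of a chat completions request, which is what the endpoint uses to route the call to a specific deployment. The helper function, payload shape, and `api-key` header here are illustrative assumptions, not the include's actual snippet; use the Azure AI Inference SDK sample from the include for real calls.

```python
def build_chat_request(deployment_name: str, api_key: str, prompt: str) -> tuple[dict, dict]:
    """Build illustrative headers and body for a chat completions call.

    The deployment name goes in the "model" field of the body; the
    Foundry endpoint routes the request to that deployment. Key-based
    auth is shown; Microsoft Entra ID tokens are another option.
    """
    headers = {
        "api-key": api_key,  # illustrative header name for key auth
        "Content-Type": "application/json",
    }
    body = {
        "model": deployment_name,  # e.g. "DeepSeek-R1", or a custom deployment name
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body
```

Because routing is by deployment name rather than model name, giving deployments distinct names lets you keep several configurations of the same model side by side and pick one per request.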