articles/ai-foundry/model-inference/tutorials/get-started-deepseek-r1.md
Azure AI model inference is a capability in Azure AI Services resources in Azure.

To create an Azure AI project that supports model inference for DeepSeek-R1, follow these steps:

> [!TIP]
> You can also create the resources using [Azure CLI](../how-to/quickstart-create-resources.md?pivots=programming-language-cli) or [infrastructure as code with Bicep](../how-to/quickstart-create-resources.md?pivots=programming-language-bicep).
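For readers who prefer the CLI route, a minimal sketch of provisioning an Azure AI Services resource from Azure CLI might look like the following. The resource group, resource name, location, and SKU are placeholders; the linked quickstart is the authoritative reference.

```shell
# Hedged sketch, assuming Azure CLI is installed and you are signed in (az login).
# All names below are placeholders; adjust to your subscription and region.
az group create --name my-ai-rg --location eastus2

# Create an Azure AI Services resource (kind AIServices) that supports model inference.
az cognitiveservices account create \
  --name my-ai-services \
  --resource-group my-ai-rg \
  --kind AIServices \
  --sku S0 \
  --location eastus2
```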
1. Go to [Azure AI Foundry portal](https://ai.azure.com) and log in with your account.
2. On the landing page, select **Create project**.
Let's now create a new model deployment for DeepSeek-R1:
5. The wizard shows the model's terms and conditions for DeepSeek-R1, which is offered as a Microsoft first-party consumption service. You can review our privacy and security commitments under [Data, privacy, and Security](../../../ai-studio/how-to/concept-data-privacy.md).
> [!TIP]
> Review the pricing details for the model by selecting [Pricing and terms](https://aka.ms/DeepSeekPricing).
6. Accept the terms by selecting **Subscribe and deploy**.
8. We automatically select an Azure AI Services connection based on your project. Use the **Customize** option to change the connection to fit your needs. DeepSeek-R1 is currently offered under the **Global Standard** deployment type, which offers higher throughput and performance.
9. Select **Deploy**.
:::image type="content" source="../media/quickstart-get-started-deepseek-r1/model-deploy.png" alt-text="Screenshot showing how to deploy the model." lightbox="../media/quickstart-get-started-deepseek-r1/model-deploy.png":::
You can get started by using the model in the playground to have an idea of the model's capabilities.
## Use the model in code
Use the Azure AI model inference endpoint and credentials to connect to the model:
:::image type="content" source="../media/overview/overview-endpoint-and-key.png" alt-text="Screenshot showing how to get the URL and key associated with the resource." lightbox="../media/overview/overview-endpoint-and-key.png":::
You can use the Azure AI Inference package to consume the model in code: