
Commit e0d0ae0 ("fixes")
1 parent 755cc31

File tree

5 files changed (+19 lines added, -13 removed)


articles/ai-foundry/model-inference/includes/use-chat-reasoning/csharp.md

Lines changed: 3 additions & 3 deletions
@@ -19,7 +19,7 @@ To complete this tutorial, you need:
 
 * A model deployment with reasoning capabilities. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.
 
-* This examples uses `DeepSeek-R1`.
+* This example uses `DeepSeek-R1`.
 
 * Install the Azure AI inference package with the following command:

@@ -46,7 +46,7 @@ ChatCompletionsClient client = new ChatCompletionsClient(
 ```
 
 > [!TIP]
-> Verify that you have deployed the model to Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as Serverless API Endpoints. However, those endpoints doesn't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.
+> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as a serverless API endpoint. However, serverless endpoints don't take the `model` parameter used in this tutorial. To confirm, go to the [Azure AI Foundry portal]() > **Models + endpoints** and check that the model is listed under the **Azure AI Services** section.
 
 If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.

@@ -60,7 +60,7 @@ client = new ChatCompletionsClient(
 
 ### Create a chat completion request
 
-The following example shows how you can create a basic reasoning capabilities with chat request to the model.
+The following example shows how you can create a basic chat request to the model.
 
 ```csharp
 ChatCompletionsOptions requestOptions = new ChatCompletionsOptions()

articles/ai-foundry/model-inference/includes/use-chat-reasoning/java.md

Lines changed: 3 additions & 3 deletions
@@ -19,7 +19,7 @@ To complete this tutorial, you need:
 
 * A model deployment with reasoning capabilities. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.
 
-* This examples uses `DeepSeek-R1`.
+* This example uses `DeepSeek-R1`.
 
 * Add the [Azure AI inference package](https://aka.ms/azsdk/azure-ai-inference/java/reference) to your project:

@@ -69,7 +69,7 @@ ChatCompletionsClient client = new ChatCompletionsClient(
 ```
 
 > [!TIP]
-> Verify that you have deployed the model to Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as Serverless API Endpoints. However, those endpoints doesn't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.
+> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as a serverless API endpoint. However, serverless endpoints don't take the `model` parameter used in this tutorial. To confirm, go to the [Azure AI Foundry portal]() > **Models + endpoints** and check that the model is listed under the **Azure AI Services** section.
 
 If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.

@@ -83,7 +83,7 @@ client = new ChatCompletionsClient(
 
 ### Create a chat completion request
 
-The following example shows how you can create a basic reasoning capabilities with chat request to the model.
+The following example shows how you can create a basic chat request to the model.
 
 ```java
 ChatCompletionsOptions requestOptions = new ChatCompletionsOptions()

articles/ai-foundry/model-inference/includes/use-chat-reasoning/javascript.md

Lines changed: 5 additions & 2 deletions
@@ -19,7 +19,7 @@ To complete this tutorial, you need:
 
 * A model deployment with reasoning capabilities. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.
 
-* This examples uses `DeepSeek-R1`.
+* This example uses `DeepSeek-R1`.
 
 * Install the [Azure Inference library for JavaScript](https://aka.ms/azsdk/azure-ai-inference/javascript/reference) with the following command:

@@ -43,6 +43,9 @@ const client = new ModelClient(
 );
 ```
 
+> [!TIP]
+> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as a serverless API endpoint. However, serverless endpoints don't take the `model` parameter used in this tutorial. To confirm, go to the [Azure AI Foundry portal]() > **Models + endpoints** and check that the model is listed under the **Azure AI Services** section.
+
 If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```javascript

@@ -62,7 +65,7 @@ const client = new ModelClient(
 
 ### Create a chat completion request
 
-The following example shows how you can create a basic reasoning capabilities with chat request to the model.
+The following example shows how you can create a basic chat request to the model.
 
 ```javascript
 var messages = [

articles/ai-foundry/model-inference/includes/use-chat-reasoning/python.md

Lines changed: 3 additions & 3 deletions
@@ -19,7 +19,7 @@ To complete this tutorial, you need:
 
 * A model deployment with reasoning capabilities. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.
 
-* This examples uses `DeepSeek-R1`.
+* This example uses `DeepSeek-R1`.
 
 * Install the [Azure AI inference package](https://aka.ms/azsdk/azure-ai-inference/python/reference) with the following command:

@@ -44,7 +44,7 @@ client = ChatCompletionsClient(
 ```
 
 > [!TIP]
-> Verify that you have deployed the model to Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as Serverless API Endpoints. However, those endpoints doesn't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.
+> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as a serverless API endpoint. However, serverless endpoints don't take the `model` parameter used in this tutorial. To confirm, go to the [Azure AI Foundry portal]() > **Models + endpoints** and check that the model is listed under the **Azure AI Services** section.
 
 If you have configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.

@@ -63,7 +63,7 @@ client = ChatCompletionsClient(
 
 ### Create a chat completion request
 
-The following example shows how you can create a basic reasoning capabilities with chat request to the model.
+The following example shows how you can create a basic chat request to the model.
 
 ```python
 from azure.ai.inference.models import SystemMessage, UserMessage
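The Python diff above cuts off at the import line. As a supplement, here is a minimal sketch of handling a DeepSeek-R1 reply: the model wraps its chain of thought in `<think>...</think>` tags at the start of the message content, so callers typically split the reasoning from the final answer. The `split_reasoning` helper name is hypothetical, and the commented-out client call only indicates the rough shape of the SDK usage, not an exact recipe.

```python
import re

def split_reasoning(content: str):
    """Separate DeepSeek-R1's <think>...</think> reasoning from the answer.

    Returns (reasoning, answer); reasoning is None when no tags are present.
    """
    match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return None, content.strip()

# With azure-ai-inference, the call would look roughly like (not executed here):
# response = client.complete(
#     model="DeepSeek-R1",
#     messages=[UserMessage("How many languages are in the world?")],
# )
# content = response.choices[0].message.content

# Demonstrate on a response-shaped string, with no service call needed:
sample = "<think>2 + 2 is elementary arithmetic.</think>The answer is 4."
reasoning, answer = split_reasoning(sample)
print(reasoning)  # -> 2 + 2 is elementary arithmetic.
print(answer)     # -> The answer is 4.
```

In practice you would apply the helper to `response.choices[0].message.content` and decide separately whether to show, log, or discard the reasoning segment.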

articles/ai-foundry/model-inference/includes/use-chat-reasoning/rest.md

Lines changed: 5 additions & 2 deletions
@@ -19,7 +19,7 @@ To complete this tutorial, you need:
 
 * A model deployment with reasoning capabilities. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.
 
-* This examples uses `DeepSeek-R1`.
+* This example uses `DeepSeek-R1`.
 
 ## Use reasoning capabilities with chat

@@ -31,6 +31,9 @@ Content-Type: application/json
 api-key: <key>
 ```
 
+> [!TIP]
+> Verify that you have deployed the model to an Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as a serverless API endpoint. However, serverless endpoints don't take the `model` parameter used in this tutorial. To confirm, go to the [Azure AI Foundry portal]() > **Models + endpoints** and check that the model is listed under the **Azure AI Services** section.
+
 If you have configured the resource with **Microsoft Entra ID** support, pass your token in the `Authorization` header:
 
 ```http

@@ -41,7 +44,7 @@ Authorization: Bearer <token>
 
 ### Create a chat completion request
 
-The following example shows how you can create a basic reasoning capabilities with chat request to the model.
+The following example shows how you can create a basic chat request to the model.
 
 ```json
 {
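The REST example's JSON body is truncated in the diff at the opening brace. As a hedged sketch, the payload below shows only the two fields the surrounding text relies on: `model` (the parameter the TIP says serverless endpoints don't take) and `messages`. Other fields of the chat completions schema are omitted, and the sample question is illustrative only.

```python
import json

# A sketch of the request body for the chat completions route on the
# Azure AI model inference endpoint. The `model` field selects the
# deployment, which is the distinction the TIP draws against
# serverless API endpoints.
payload = {
    "model": "DeepSeek-R1",
    "messages": [
        {"role": "user", "content": "How many languages are in the world?"}
    ],
}

body = json.dumps(payload, indent=2)
print(body)
```

The serialized `body` is what the earlier `POST` headers (`Content-Type: application/json` plus either `api-key` or `Authorization: Bearer <token>`) would accompany.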
