Commit 1a4a041 ("review")
1 parent 4efeb9a

6 files changed: +10 -187 lines

articles/ai-foundry/model-inference/includes/use-chat-completions/javascript.md (2 additions, 6 deletions)

````diff
@@ -24,13 +24,9 @@ To use chat completion models in your application, you need:

 [!INCLUDE [how-to-prerequisites](../how-to-prerequisites.md)]

-* A chat completions model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
-
-* Install the [Azure Inference library for JavaScript](https://aka.ms/azsdk/azure-ai-inference/javascript/reference) with the following command:
+[!INCLUDE [how-to-prerequisites-javascript](../how-to-prerequisites-javascript.md)]

-    ```bash
-    npm install @azure-rest/ai-inference
-    ```
+* A chat completions model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.

 ## Use chat completions
````

articles/ai-foundry/model-inference/includes/use-chat-completions/python.md (3 additions, 5 deletions)

````diff
@@ -174,15 +174,13 @@ Some models can create JSON outputs. Set `response_format` to `json_object` to e


 ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormatJSON
-
 response = client.complete(
     messages=[
         SystemMessage(content="You are a helpful assistant that always generate responses in JSON format, using."
                       " the following format: { ""answer"": ""response"" }."),
         UserMessage(content="How many languages are in the world?"),
     ],
-    response_format={ "type": ChatCompletionsResponseFormatJSON() }
+    response_format="json_object"
 )
 ```

@@ -213,9 +211,9 @@ The following code example creates a tool definition that is able to look from f


 ```python
-from azure.ai.inference.models import FunctionDefinition, ChatCompletionsFunctionToolDefinition
+from azure.ai.inference.models import FunctionDefinition, ChatCompletionsToolDefinition

-flight_info = ChatCompletionsFunctionToolDefinition(
+flight_info = ChatCompletionsToolDefinition(
     function=FunctionDefinition(
         name="get_flight_info",
         description="Returns information about the next flight between two cities. This includes the name of the airline, flight number and the date and time of the next flight",
````

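Both Python changes in this file alter how the request is expressed in the SDK, but the wire format they produce is the same OpenAI-style REST payload. A minimal stdlib sketch of that payload can be handy for checking what gets sent; the tool name and description come from the docs above, while the `origin_city`/`destination_city` parameters and the message contents are hypothetical, added only for illustration:

```python
import json

# Illustrative chat-completions request body. `response_format` and the
# tool definition mirror the documented SDK arguments; the parameter
# schema below is a made-up example, not taken from the docs.
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "When is the next flight from Miami to Seattle?"},
    ],
    "response_format": {"type": "json_object"},
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_flight_info",
                "description": (
                    "Returns information about the next flight between two "
                    "cities. This includes the name of the airline, flight "
                    "number and the date and time of the next flight"
                ),
                "parameters": {
                    "type": "object",
                    "properties": {
                        "origin_city": {"type": "string"},
                        "destination_city": {"type": "string"},
                    },
                    "required": ["origin_city", "destination_city"],
                },
            },
        }
    ],
}

# Serialize exactly as it would travel over the wire.
payload = json.dumps(body)
```

The round trip through `json.dumps`/`json.loads` is a quick sanity check that the structure is valid JSON before handing it to any client.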
articles/ai-foundry/model-inference/includes/use-chat-completions/rest.md (1 addition, 1 deletion)

```diff
@@ -553,7 +553,7 @@ Some models can reason across text and images and generate text completions base
 To see this capability, download an image and encode the information as `base64` string. The resulting data should be inside of a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs):

 > [!TIP]
-> You will need to construct the data URL using a scripting or programming language. This tutorial use [this sample image](../../../../ai-foundry/media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
+> You will need to construct the data URL using a scripting or programming language. This tutorial uses [this sample image](../../../../ai-foundry/media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.

 Visualize the image:
```

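The tip touched by this file says to construct the data URL with a scripting or programming language. A minimal Python sketch of that step, using only the standard library (the filename in the usage comment is illustrative; note that a real data URL carries the raw base64 text after the comma, with no `0x` prefix like the tip's placeholder):

```python
import base64

def image_to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Usage with an image on disk (path is illustrative):
# with open("small-language-models-chart-example.jpg", "rb") as f:
#     data_url = image_to_data_url(f.read())
```

The resulting string can be placed directly in the `image_url` content part of a chat message.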
articles/ai-foundry/model-inference/includes/use-chat-reasoning/python.md (1 addition, 1 deletion)

```diff
@@ -23,7 +23,7 @@ To complete this tutorial, you need:

 * A model with reasoning capabilities model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a reasoning model.

-* This examples use `DeepSeek-R1`.
+* This example use `DeepSeek-R1`.

 ## Use reasoning capabilities with chat
```

articles/ai-foundry/model-inference/includes/use-embeddings/java.md (3 additions, 24 deletions)

````diff
@@ -24,30 +24,7 @@ To use embedding models in your application, you need:

 [!INCLUDE [how-to-prerequisites](../how-to-prerequisites.md)]

-* An embeddings model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add an embeddings model to your resource.
-
-* Add the Azure AI inference package to your project:
-
-    ```xml
-    <dependency>
-        <groupId>com.azure</groupId>
-        <artifactId>azure-ai-inference</artifactId>
-        <version>1.0.0-beta.1</version>
-    </dependency>
-    ```
-
-    > [!TIP]
-    > Read more about the [Azure AI inference package and reference](https://aka.ms/azsdk/azure-ai-inference/java/reference).
-
-* If you are using Entra ID, you also need the following package:
-
-    ```xml
-    <dependency>
-        <groupId>com.azure</groupId>
-        <artifactId>azure-identity</artifactId>
-        <version>1.13.3</version>
-    </dependency>
-    ```
+[!INCLUDE [how-to-prerequisites-java](../how-to-prerequisites-java.md)]

 * Import the following namespace:

@@ -65,6 +42,8 @@ To use embedding models in your application, you need:
     import java.util.List;
 ```

+* An embeddings model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add an embeddings model to your resource.
+
 ## Use embeddings

 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
````

articles/ai-foundry/model-inference/includes/use-image-generations/rest.md (0 additions, 150 deletions)

This file was deleted.

0 commit comments