
Commit 4e152aa

update date and make acrolinx fixes

1 parent d6bda1d commit 4e152aa

File tree

5 files changed, +34 −33 lines changed


articles/ai-foundry/model-inference/includes/use-chat-multi-modal/csharp.md

Lines changed: 6 additions & 6 deletions
@@ -7,7 +7,7 @@ author: mopeakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 03/20/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: references_regions, tool_generated
@@ -16,7 +16,7 @@ zone_pivot_groups: azure-ai-inference-samples
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
-This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. In addition to text input, multimodal models can accept other input types, such as images or audio input.
+This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. Apart from text input, multimodal models can accept other input types, such as images or audio input.
 
 ## Prerequisites
 
@@ -26,7 +26,7 @@ To use chat completion models in your application, you need:
 
 [!INCLUDE [how-to-prerequisites-csharp](../how-to-prerequisites-csharp.md)]
 
-* A chat completions model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
+* A chat completions model deployment. If you don't have one, read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
 
 * This example uses `phi-4-multimodal-instruct`.
 
@@ -42,7 +42,7 @@ ChatCompletionsClient client = new ChatCompletionsClient(
 );
 ```
 
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 
 ```csharp
@@ -125,7 +125,7 @@ Usage:
 Total tokens: 2506
 ```
 
-Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model may break down a given image on a different number of patches. Read the model card to learn the details.
+Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model might break down a given image on a different number of patches. Read the model card to learn the details.
 
 > [!IMPORTANT]
 > Some models support only one image for each turn in the chat conversation and only the last image is retained in context. If you add multiple images, it results in an error.
@@ -206,4 +206,4 @@ Usage:
 Total tokens: 84
 ```
 
-Audio is broken into tokens and submitted to the model for processing. Some models may operate directly over audio tokens while other may use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
+Audio is broken into tokens and submitted to the model for processing. Some models might operate directly over audio tokens while other might use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
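The patch and token counts discussed in this file are model-specific, but the arithmetic behind them can be sketched. The example below assumes a hypothetical model that tiles an image into fixed 16×16-pixel patches; real multimodal models use their own strategies, so consult the model card for actual numbers.

```python
import math

def estimate_image_patches(width_px: int, height_px: int, patch_size: int = 16) -> int:
    """Rough patch-count estimate for a hypothetical model that tiles an
    image into fixed-size square patches (an illustrative assumption only).
    Partial patches at the edges still cost a full patch, hence ceil()."""
    cols = math.ceil(width_px / patch_size)
    rows = math.ceil(height_px / patch_size)
    return cols * rows

# A 640x480 image under this assumption: 40 columns x 30 rows = 1200 patches.
print(estimate_image_patches(640, 480))  # 1200
```

Under this sketch, doubling both image dimensions roughly quadruples the token cost, which is why resizing large images down before submission can matter.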

articles/ai-foundry/model-inference/includes/use-chat-multi-modal/java.md

Lines changed: 5 additions & 4 deletions
@@ -7,14 +7,15 @@ author: mopeakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 03/20/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: references_regions, tool_generated
 zone_pivot_groups: azure-ai-inference-samples
 ---
 
 This article explains how to use chat completions API with models supporting images or audio deployed to Azure AI model inference in Azure AI services.
+This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. Apart from text input, multimodal models can accept other input types, such as images or audio input.
 
 ## Prerequisites
 
@@ -39,7 +40,7 @@ ChatCompletionsClient client = new ChatCompletionsClientBuilder()
 .buildClient();
 ```
 
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```java
 TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
@@ -93,7 +94,7 @@ System.out.println("\tTotal tokens: " + response.getValue().getUsage().getTotalT
 System.out.println("\tCompletion tokens: " + response.getValue().getUsage().getCompletionTokens());
 ```
 
-Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model may break down a given image on a different number of patches. Read the model card to learn the details.
+Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model might break down a given image on a different number of patches. Read the model card to learn the details.
 
 > [!IMPORTANT]
 > Some models support only one image for each turn in the chat conversation and only the last image is retained in context. If you add multiple images, it results in an error.
@@ -120,4 +121,4 @@ ChatCompletions response = client.complete(options);
 
 ## Use chat completions with audio
 
-Some models can reason across text and audio inputs. This capability is not available in the Azure AI Inference package for Java.
+Some models can reason across text and audio inputs. This capability isn't available in the Azure AI Inference package for Java.

articles/ai-foundry/model-inference/includes/use-chat-multi-modal/javascript.md

Lines changed: 7 additions & 7 deletions
@@ -7,7 +7,7 @@ author: mopeakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 03/20/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: references_regions, tool_generated
@@ -16,7 +16,7 @@ zone_pivot_groups: azure-ai-inference-samples
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
-This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. In addition to text input, multimodal models can accept other input types, such as images and audio input.
+This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. Apart from text input, multimodal models can accept other input types, such as images or audio input.
 
 ## Prerequisites
 
@@ -26,9 +26,9 @@ To use chat completion models in your application, you need:
 
 [!INCLUDE [how-to-prerequisites-javascript](../how-to-prerequisites-javascript.md)]
 
-* A chat completions model deployment with support for **audio and images**. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
+* A chat completions model deployment with support for **audio and images**. If you don't have one, see [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
 
-* This tutorial uses `Phi-4-multimodal-instruct`.
+* This article uses `Phi-4-multimodal-instruct`.
 
 ## Use chat completions
 
@@ -41,7 +41,7 @@ const client = new ModelClient(
 );
 ```
 
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```javascript
 const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
@@ -130,7 +130,7 @@ Usage:
 Total tokens: 2506
 ```
 
-Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model may break down a given image on a different number of patches. Read the model card to learn the details.
+Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model might break down a given image on a different number of patches. Read the model card to learn the details.
 
 > [!IMPORTANT]
 > Some models support only one image for each turn in the chat conversation and only the last image is retained in context. If you add multiple images, it results in an error.
@@ -244,4 +244,4 @@ const response = await client.path("/chat/completions").post({
 });
 ```
 
-Audio is broken into tokens and submitted to the model for processing. Some models may operate directly over audio tokens while other may use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
+Audio is broken into tokens and submitted to the model for processing. Some models might operate directly over audio tokens while other might use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.

articles/ai-foundry/model-inference/includes/use-chat-multi-modal/python.md

Lines changed: 7 additions & 7 deletions
@@ -7,7 +7,7 @@ author: mopeakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 03/20/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: references_regions, tool_generated
@@ -16,7 +16,7 @@ zone_pivot_groups: azure-ai-inference-samples
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
-This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. In addition to text input, multimodal models can accept other input types, such as images and audio input.
+This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. Apart from text input, multimodal models can accept other input types, such as images or audio input.
 
 ## Prerequisites
 
@@ -26,9 +26,9 @@ To use chat completion models in your application, you need:
 
 [!INCLUDE [how-to-prerequisites-python](../how-to-prerequisites-python.md)]
 
-* A chat completions model deployment with support for **audio and images**. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
+* A chat completions model deployment with support for **audio and images**. If you don't have one, see [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
 
-* This tutorial uses `Phi-4-multimodal-instruct`.
+* This article uses `Phi-4-multimodal-instruct`.
 
 ## Use chat completions
 
@@ -47,7 +47,7 @@ client = ChatCompletionsClient(
 )
 ```
 
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 
 ```python
@@ -133,7 +133,7 @@ Usage:
 Total tokens: 2506
 ```
 
-Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model may break down a given image on a different number of patches. Read the model card to learn the details.
+Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model might break down a given image on a different number of patches. Read the model card to learn the details.
 
 ## Use chat completions with audio
 
@@ -214,4 +214,4 @@ response = client.complete(
 )
 ```
 
-Audio is broken into tokens and submitted to the model for processing. Some models may operate directly over audio tokens while other may use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
+Audio is broken into tokens and submitted to the model for processing. Some models might operate directly over audio tokens while other might use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
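Across these SDK-specific files, the image input travels in the ordinary chat `messages` array as typed content parts. The sketch below illustrates that structure in Python, assuming the OpenAI-style `image_url` content-part shape used by these APIs; the data URL is a truncated placeholder, not a real image.

```python
# Build a multimodal user message: one text part plus one image part.
# The data URL below is a placeholder; build a real one from your image bytes.
text_part = {"type": "text", "text": "Which conclusion can you draw from the chart?"}
image_part = {
    "type": "image_url",
    "image_url": {"url": "data:image/jpg;base64,..."},
}

messages = [{"role": "user", "content": [text_part, image_part]}]

print(len(messages[0]["content"]))  # 2
```

A plain string `content` still works for text-only turns; the list-of-parts form is what lets a single message mix modalities.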

articles/ai-foundry/model-inference/includes/use-chat-multi-modal/rest.md

Lines changed: 9 additions & 9 deletions
@@ -7,7 +7,7 @@ author: mopeakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 1/21/2025
+ms.date: 03/20/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: references_regions, tool_generated
@@ -16,17 +16,17 @@ zone_pivot_groups: azure-ai-inference-samples
 
 [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
 
-This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. In addition to text input, multimodal models can accept other input types, such as images and audio input.
+This article explains how to use chat completions API with _multimodal_ models deployed to Azure AI model inference in Azure AI services. Apart from text input, multimodal models can accept other input types, such as images or audio input.
 
 ## Prerequisites
 
 To use chat completion models in your application, you need:
 
 [!INCLUDE [how-to-prerequisites](../how-to-prerequisites.md)]
 
-* A chat completions model deployment. If you don't have one read [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
+* A chat completions model deployment. If you don't have one, see [Add and configure models to Azure AI services](../../how-to/create-model-deployments.md) to add a chat completions model to your resource.
 
-* This tutorial uses `Phi-4-multimodal-instruct`.
+* This article uses `Phi-4-multimodal-instruct`.
 
 
 ## Use chat completions
@@ -39,15 +39,15 @@ Content-Type: application/json
 api-key: <key>
 ```
 
-If you have configured the resource with **Microsoft Entra ID** support, pass you token in the `Authorization` header with the format `Bearer <token>`. Use scope `https://cognitiveservices.azure.com/.default`.
+If you've configured the resource with **Microsoft Entra ID** support, pass your token in the `Authorization` header with the format `Bearer <token>`. Use scope `https://cognitiveservices.azure.com/.default`.
 
 ```http
 POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
 Content-Type: application/json
 Authorization: Bearer <token>
 ```
 
-Using Microsoft Entra ID may require additional configuration in your resource to grant access. Learn how to [configure key-less authentication with Microsoft Entra ID](../../how-to/configure-entra-id.md).
+Using Microsoft Entra ID might require extra configuration in your resource to grant access. Learn how to [configure key-less authentication with Microsoft Entra ID](../../how-to/configure-entra-id.md).
 
 ## Use chat completions with images
 
@@ -59,7 +59,7 @@ Some models can reason across text and images and generate text completions base
 To see this capability, download an image and encode the information as `base64` string. The resulting data should be inside of a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs):
 
 > [!TIP]
-> You will need to construct the data URL using a scripting or programming language. This tutorial uses [this sample image](../../../../ai-foundry/media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
+> You'll need to construct the data URL using a scripting or programming language. This article uses [this sample image](../../../../ai-foundry/media/how-to/sdks/small-language-models-chart-example.jpg) in JPEG format. A data URL has a format as follows: `data:image/jpg;base64,0xABCDFGHIJKLMNOPQRSTUVWXYZ...`.
 
 Visualize the image:
 
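The data URL construction mentioned in the tip above takes only a few lines. This sketch encodes stand-in bytes instead of the sample image (the byte string is a placeholder for real JPEG data read from disk):

```python
import base64

def to_data_url(image_bytes: bytes, mime_type: str = "image/jpg") -> str:
    """Build a data URL of the form data:<mime>;base64,<payload>."""
    payload = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{payload}"

# Stand-in bytes; in practice read them from the downloaded JPEG file,
# e.g. image_bytes = open("chart.jpg", "rb").read()
url = to_data_url(b"\xff\xd8\xff\xe0 example")
print(url[:21])  # data:image/jpg;base64
```

The base64 payload inflates the raw bytes by roughly a third, which is worth remembering when judging request-size limits.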
@@ -123,7 +123,7 @@ The response is as follows, where you can see the model's usage statistics:
 }
 ```
 
-Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model may break down a given image on a different number of patches. Read the model card to learn the details.
+Images are broken into tokens and submitted to the model for processing. When referring to images, each of those tokens is typically referred as *patches*. Each model might break down a given image on a different number of patches. Read the model card to learn the details.
 
 ## Use chat completions with audio
 
@@ -244,4 +244,4 @@ The response is as follows, where you can see the model's usage statistics:
 }
 ```
 
-Audio is broken into tokens and submitted to the model for processing. Some models may operate directly over audio tokens while other may use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.
+Audio is broken into tokens and submitted to the model for processing. Some models might operate directly over audio tokens while others might use internal modules to perform speech-to-text, resulting in different strategies to compute tokens. Read the model card for details about how each model operates.

0 commit comments
