articles/ai-services/openai/gpt-v-quickstart.md
Lines changed: 5 additions & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: 'Quickstart: Use vision-enabled models with the Azure OpenAI Service'
+title: 'Quickstart: Use vision-enabled chats with the Azure OpenAI Service'
 titleSuffix: Azure OpenAI
 description: Use this article to get started using Azure OpenAI to deploy and use the GPT-4 Turbo with Vision model or other vision-enabled models.
 services: cognitive-services
@@ -22,6 +22,10 @@ Get started using GPT-4 Turbo with images with the Azure OpenAI Service.
 >
 > The latest vision-capable models are `gpt-4o` and `gpt-4o mini`. These models are in public preview. The latest available GA model is `gpt-4` version `turbo-2024-04-09`.
 
+> [!IMPORTANT]
+> Extra usage fees might apply when using chat completion models with vision functionality.
+
+
 ::: zone pivot="ai-foundry-portal"
 
 [!INCLUDE [Azure AI Foundry portal quickstart](includes/gpt-v-studio.md)]
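The note added above concerns chat completion requests that include image input. As a hedged illustration (not part of the changed article), a vision-enabled chat pairs text parts with image parts inside a single user message; the deployment name and image URL below are placeholders:

```python
import json

def build_vision_request(deployment: str, prompt: str, image_url: str) -> dict:
    """Sketch of a chat-completions request body for a vision-enabled model.

    The deployment name and image URL are illustrative placeholders,
    not values taken from the article.
    """
    return {
        "model": deployment,
        "messages": [
            {
                "role": "user",
                # Vision-enabled chats mix text parts and image parts
                # in the content list of one message.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

body = build_vision_request(
    "gpt-4o", "Describe this image", "https://example.com/car-accident.png"
)
print(json.dumps(body, indent=2))
```

This is the same message shape the playground builds for you when you attach an image to a prompt, so no code is required for the quickstart itself.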
articles/ai-services/openai/includes/gpt-v-studio.md
@@ -2,32 +2,44 @@
-title: 'Quickstart: Use images in chats with the Azure OpenAI Service'
+title: 'Quickstart: Use vision-enabled chats with the Azure OpenAI Service'
 titleSuffix: Azure OpenAI
 description: Use this article to get started using Azure AI Foundry to deploy and use an image-capable model.
 services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: include
 ms.custom: references_regions, ignite-2024
-ms.date: 12/05/2024
+ms.date: 01/29/2025
 ---
 
-Start using images in your AI chats with a no-code approach through Azure AI Foundry.
+Use this article to get started using [Azure AI Foundry](https://ai.azure.com) to deploy and test a chat completion model with image understanding.
+
 
 ## Prerequisites
 
 - An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
-- An Azure OpenAI Service resource. For more information about resource creation, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
+- Once you have your Azure subscription, <a href="/azure/ai-services/openai/how-to/create-resource?pivots=web-portal" title="Create an Azure OpenAI resource." target="_blank">create an Azure OpenAI resource</a>.
+  For more information about resource creation, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
+- An [Azure AI Foundry hub](/azure/ai-studio/how-to/create-azure-ai-resource) with your Azure OpenAI resource added as a connection.
+
+## Prepare your media
+
+You need an image to complete this quickstart. You can use this sample image or any other image you have available.
+
+:::image type="content" source="/azure/ai-studio/media/quickstarts/multimodal-vision/car-accident.png" alt-text="Photo of a car accident that can be used to complete the quickstart." lightbox="/azure/ai-studio/media/quickstarts/multimodal-vision/car-accident.png":::
 
 ## Go to Azure AI Foundry
 
-Browse to [Azure AI Foundry](https://ai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.
+1. Browse to [Azure AI Foundry](https://ai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.
+1. Select the hub you'd like to work in.
+1. On the left nav menu, select **Models + endpoints** and select **+ Deploy model**.
+1. Choose an image-capable deployment by selecting model name: **gpt-4o** or **gpt-4o-mini**. In the window that appears, select a name and deployment type. Make sure your Azure OpenAI resource is connected. For more information about model deployment, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
+1. Select **Deploy**.
+1. Next, select your new model and select **Open in playground**. In the chat playground, the deployment you created should be selected in the **Deployment** dropdown.
 
-Create a project or select an existing one. Navigate to the **Models + endpoints** option on the left, and select **Deploy model**. Choose an image-capable deployment by selecting model name: **gpt-4o** or **gpt-4o-mini**. For more information about model deployment, see the [resource deployment guide](/azure/ai-services/openai/how-to/create-resource).
 
-Select the new deployment and select **Open in playground**.
 
 ## Playground
 
-From this page, you can quickly iterate and experiment with the model's capabilities.
+In this chat session, you instruct the assistant to aid you in understanding images that you input.
 
 For general help with assistant setup, chat sessions, settings, and panels, refer to the [Chat quickstart](/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line&pivots=programming-language-studio).
 
@@ -36,14 +48,50 @@ For general help with assistant setup, chat sessions, settings, and panels, refe
 
 In this chat session, you're instructing the assistant to aid in understanding images that you input.
 
 1. To start, make sure your image-capable deployment is selected in the **Deployment** dropdown.
-2. In the **Setup** pane, provide a System Message to guide the assistant. The default System Message is: "You are an AI assistant that helps people find information." You can tailor the System Message to the image or scenario that you're uploading.
+1. In the context text box on the **Setup** panel, provide this prompt to guide the assistant: `"You're an AI assistant that helps people find information."` Or, you can tailor the prompt to your image or scenario.
 
 > [!NOTE]
 > We recommend you update the System Message to be specific to the task in order to avoid unhelpful responses from the model.
 
-1. Save your changes, and when prompted to confirm updating the system message, select **Continue**.
-1. In the **Chat session** pane, enter a text prompt like "Describe this image," and upload an image with the attachment button. You can use a different text prompt for your use case. Then select **Send**.
-1. Observe the output provided. Consider asking follow-up questions related to the analysis of your image to learn more.
+1. Select **Apply changes** to save your changes.
+1. In the chat session pane, select the attachment button and then **Upload image**. Choose your image.
+1. Add the following prompt in the chat field: `Describe this image`, and then select the send icon to submit it.
+1. The send icon is replaced by a stop button. If you select it, the assistant stops processing your request. For this quickstart, let the assistant finish its reply.
+1. The assistant replies with a description of the image.
+1. Ask a follow-up question related to the analysis of your image. You could enter, `"What should I highlight about this image to my insurance company?"`.
+1. You should receive a relevant response similar to what's shown here:
+    ```
+    When reporting the incident to your insurance company, you should highlight the following key points from the image:
+
+    1. **Location of Damage**: Clearly state that the front end of the car, particularly the driver's side, is damaged. Mention the crumpled hood, broken front bumper, and the damaged left headlight.
+
+    2. **Point of Impact**: Indicate that the car has collided with a guardrail, which may suggest that no other vehicles were involved in the accident.
+
+    3. **Condition of the Car**: Note that the damage seems to be concentrated on the front end, and there is no visible damage to the windshield or rear of the car from this perspective.
+
+    4. **License Plate Visibility**: Mention that the license plate is intact and can be used for identification purposes.
+
+    5. **Environment**: Report that the accident occurred near a roadside with a guardrail, possibly in a rural or semi-rural area, which might help in establishing the accident location and context.
+
+    6. **Other Observations**: If there were any other circumstances or details not visible in the image that may have contributed to the accident, such as weather conditions, road conditions, or any other relevant information, be sure to include those as well.
+
+    Remember to be factual and descriptive, avoiding speculation about the cause of the accident, as the insurance company will conduct its own investigation.
+    ```
+
+## View and export code
+
+At any point in the chat session, you can enable the **Show raw JSON** switch at the top of the chat window to see the conversation formatted as JSON. Here's what it looks like at the beginning of the quickstart chat session:
+
+```json
+[
+    {
+        "role": "system",
+        "content": [
+            "You are an AI assistant that helps people find information."
+        ]
+    }
+]
+```
+
 
 
## Clean up resources
@@ -52,3 +100,5 @@ If you want to clean up and remove an Azure OpenAI resource, you can delete the
articles/ai-services/speech-service/includes/quickstarts/platform/java.md
Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Install a Java Development Kit such as [Azul Zulu OpenJDK](https://www.azul.com/
 
 ## Install the Speech SDK for Java
 
-Some of the instructions use a specific SDK version such as `1.24.2`. To check the latest version, [search our GitHub repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk/search?q=com.microsoft.cognitiveservices.speech%3Aclient-sdk).
+Some of the instructions use a specific SDK version such as `1.42.0`. To check the latest version, [search our GitHub repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk/search?q=com.microsoft.cognitiveservices.speech%3Aclient-sdk).
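The search link above queries the Maven coordinates `com.microsoft.cognitiveservices.speech:client-sdk`. Assuming a Maven project (a sketch, not an official snippet; confirm the latest version via that search), the corresponding `pom.xml` dependency entry would look like:

```xml
<!-- Speech SDK for Java; the version shown is the one referenced above. -->
<dependency>
    <groupId>com.microsoft.cognitiveservices.speech</groupId>
    <artifactId>client-sdk</artifactId>
    <version>1.42.0</version>
</dependency>
```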