Commit 987f8ac

edits

1 parent 2d3c5ab commit 987f8ac
File tree

1 file changed

+125
-88
lines changed


articles/ai-services/openai/includes/gpt-v-javascript.md

Lines changed: 125 additions & 88 deletions
@@ -56,22 +56,10 @@ npm init
 
 Install the client libraries with:
 
-## [**TypeScript**](#tab/typescript)
-
-```console
-npm install openai @azure/openai @azure/identity
-```
-
-The `@azure/openai` package provides the types the Azure service objects.
-
-## [**JavaScript**](#tab/javascript)
-
 ```console
 npm install openai @azure/identity
 ```
 
----
-
 Your app's _package.json_ file will be updated with the dependencies.
 
 ## Create a new JavaScript application for image prompts
@@ -81,6 +69,71 @@ Your app's _package.json_ file will be updated with the dependencies.
 1. Create a _quickstart.ts_ and paste in the following code.
 
 ```typescript
+import "dotenv/config";
+import { AzureOpenAI } from "openai";
+import type {
+  ChatCompletion,
+  ChatCompletionCreateParamsNonStreaming,
+} from "openai/resources/index";
+
+// You will need to set these environment variables or edit the following values
+const endpoint = process.env["AZURE_OPENAI_ENDPOINT"] || "<endpoint>";
+const apiKey = process.env["AZURE_OPENAI_API_KEY"] || "<api key>";
+const imageUrl = process.env["IMAGE_URL"] || "<image url>";
+
+// Required Azure OpenAI deployment name and API version
+const apiVersion = "2024-07-01-preview";
+const deploymentName = "gpt-4-with-turbo";
+
+function getClient(): AzureOpenAI {
+  return new AzureOpenAI({
+    endpoint,
+    apiKey,
+    apiVersion,
+    deployment: deploymentName,
+  });
+}
+function createMessages(): ChatCompletionCreateParamsNonStreaming {
+  return {
+    messages: [
+      { role: "system", content: "You are a helpful assistant." },
+      {
+        role: "user",
+        content: [
+          {
+            type: "text",
+            text: "Describe this picture:",
+          },
+          {
+            type: "image_url",
+            image_url: {
+              url: imageUrl,
+            },
+          },
+        ],
+      },
+    ],
+    model: "",
+    max_tokens: 2000,
+  };
+}
+async function printChoices(completion: ChatCompletion): Promise<void> {
+  for (const choice of completion.choices) {
+    console.log(choice.message);
+  }
+}
+export async function main() {
+  console.log("== Get GPT-4 Turbo with vision Sample ==");
+
+  const client = getClient();
+  const messages = createMessages();
+  const completion = await client.chat.completions.create(messages);
+  await printChoices(completion);
+}
+
+main().catch((err) => {
+  console.error("Error occurred:", err);
+});
 ```
 1. Make the following changes:
 1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
@@ -106,69 +159,71 @@ Your app's _package.json_ file will be updated with the dependencies.
 1. Replace the contents of _quickstart.js_ with the following code.
 
 ```javascript
-```
-1. Make the following changes:
-1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
-1. Change the value of the `"url"` field to the URL of your image.
-> [!TIP]
-> You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
-1. Run the application with the following command:
-
-```console
-node quickstart.js
-```
-
-## Create a new JavaScript application for image prompt enhancements
-
-GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored enhancements. When combined with Azure AI Vision, it enhances your chat experience by providing the chat model with more detailed information about visible text in the image and the locations of objects.
-
-The **Optical Character Recognition (OCR)** integration allows the model to produce higher quality responses for dense text, transformed images, and number-heavy financial documents. It also covers a wider range of languages.
-
-The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes.
-
-> [!CAUTION]
-> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
-
-> [!IMPORTANT]
-> Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models.
-
-## [**TypeScript**](#tab/typescript)
-
-1. Replace the contents of _quickstart.py_ with the following code.
+import "dotenv/config";
+import { AzureOpenAI } from "openai";
 
-```typescript
+// You will need to set these environment variables or edit the following values
+const endpoint = process.env["AZURE_OPENAI_ENDPOINT"] || "<endpoint>";
+const apiKey = process.env["AZURE_OPENAI_API_KEY"] || "<api key>";
+const imageUrl = process.env["IMAGE_URL"] || "<image url>";
 
+// Required Azure OpenAI deployment name and API version
+const apiVersion = "2024-07-01-preview";
+const deploymentName = "gpt-4-with-turbo";
+
+function getClient() {
+  return new AzureOpenAI({
+    endpoint,
+    apiKey,
+    apiVersion,
+    deployment: deploymentName,
+  });
+}
+function createMessages() {
+  return {
+    messages: [
+      { role: "system", content: "You are a helpful assistant." },
+      {
+        role: "user",
+        content: [
+          {
+            type: "text",
+            text: "Describe this picture:",
+          },
+          {
+            type: "image_url",
+            image_url: {
+              url: imageUrl,
+            },
+          },
+        ],
+      },
+    ],
+    model: "",
+    max_tokens: 2000,
+  };
+}
+async function printChoices(completion) {
+  for (const choice of completion.choices) {
+    console.log(choice.message);
+  }
+}
+export async function main() {
+  console.log("== Get GPT-4 Turbo with vision Sample ==");
+
+  const client = getClient();
+  const messages = createMessages();
+  const completion = await client.chat.completions.create(messages);
+  await printChoices(completion);
+}
+
+main().catch((err) => {
+  console.error("Error occurred:", err);
+});
 ```
 
 1. Make the following changes:
-1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
-1. Change the value of the `"url"` field to the URL of your image.
-> [!TIP]
-> You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
-
-
-1. Build the application with the following command:
-
-```console
-tsc
-```
-
-1. Run the application with the following command:
-
-```console
-node quickstart.js
-```
-
-
-## [**JavaScript**](#tab/javascript)
-
-```javascript
-```
-
-1. Make the following changes:
-1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
+1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
 1. Change the value of the `"url"` field to the URL of your image.
 > [!TIP]
 > You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
@@ -180,24 +235,6 @@ The **object grounding** integration brings a new layer to data analysis and use
 
 ---
 
-1. Make the following changes:
-1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
-1. Change the value of the `"url"` field to the URL of your image.
-> [!TIP]
-> You can also use a base 64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
-1. Run the application with the `python` command:
-
-```console
-python quickstart.py
-```
-
-## Create a new JavaScript application for video prompt enhancements
-
-Video prompt integration is outside the scope of this quickstart. See the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-vision-enhancement-with-video) for detailed instructions on setting up video prompts in chat completions programmatically.
-
----
-
 ## Clean up resources
 
 If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
