@@ -56,22 +56,10 @@ npm init

Install the client libraries with:

- ## [**TypeScript**](#tab/typescript)
-
- ```console
- npm install openai @azure/openai @azure/identity
- ```
-
- The `@azure/openai` package provides the types for the Azure service objects.
-
- ## [**JavaScript**](#tab/javascript)
-
```console
npm install openai @azure/identity
```

- ---
-
Your app's _package.json_ file will be updated with the dependencies.

## Create a new JavaScript application for image prompts
@@ -81,6 +69,71 @@ Your app's _package.json_ file will be updated with the dependencies.

1. Create a _quickstart.ts_ and paste in the following code.

    ```typescript
+   import "dotenv/config";
+   import { AzureOpenAI } from "openai";
+   import type {
+     ChatCompletion,
+     ChatCompletionCreateParamsNonStreaming,
+   } from "openai/resources/index";
+
+   // You will need to set these environment variables or edit the following values
+   const endpoint = process.env["AZURE_OPENAI_ENDPOINT"] || "<endpoint>";
+   const apiKey = process.env["AZURE_OPENAI_API_KEY"] || "<api key>";
+   const imageUrl = process.env["IMAGE_URL"] || "<image url>";
+
+   // Required Azure OpenAI deployment name and API version
+   const apiVersion = "2024-07-01-preview";
+   const deploymentName = "gpt-4-with-turbo";
+
+   function getClient(): AzureOpenAI {
+     return new AzureOpenAI({
+       endpoint,
+       apiKey,
+       apiVersion,
+       deployment: deploymentName,
+     });
+   }
+   function createMessages(): ChatCompletionCreateParamsNonStreaming {
+     return {
+       messages: [
+         { role: "system", content: "You are a helpful assistant." },
+         {
+           role: "user",
+           content: [
+             {
+               type: "text",
+               text: "Describe this picture:",
+             },
+             {
+               type: "image_url",
+               image_url: {
+                 url: imageUrl,
+               },
+             },
+           ],
+         },
+       ],
+       model: "",
+       max_tokens: 2000,
+     };
+   }
+   async function printChoices(completion: ChatCompletion): Promise<void> {
+     for (const choice of completion.choices) {
+       console.log(choice.message);
+     }
+   }
+   export async function main() {
+     console.log("== Get GPT-4 Turbo with vision Sample ==");
+
+     const client = getClient();
+     const messages = createMessages();
+     const completion = await client.chat.completions.create(messages);
+     await printChoices(completion);
+   }
+
+   main().catch((err) => {
+     console.error("Error occurred:", err);
+   });
    ```
1. Make the following changes:
    1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
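The TypeScript sample stores the image location in `imageUrl` and passes it through as `image_url.url`. The TIP in this quickstart notes that base-64 encoded image data can be used in place of a URL; the sketch below shows one way to build such a data URL from a local file. The helper name `imageFileToDataUrl` and the `image/jpeg` default are illustrative assumptions, not part of the quickstart.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical helper (not part of the sample): turn a local image file into
// a data URL that can be assigned to `image_url.url` instead of an HTTP URL.
function imageFileToDataUrl(path: string, mimeType: string = "image/jpeg"): string {
  const base64 = readFileSync(path).toString("base64"); // raw bytes -> base-64 text
  return `data:${mimeType};base64,${base64}`;
}
```

With a helper like this, `const imageUrl = imageFileToDataUrl("./photo.jpg");` would stand in for the `IMAGE_URL` environment variable.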
@@ -106,69 +159,71 @@ Your app's _package.json_ file will be updated with the dependencies.

1. Replace the contents of _quickstart.js_ with the following code.

    ```javascript
-   ```
- 1. Make the following changes:
-     1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
-     1. Change the value of the `"url"` field to the URL of your image.
-        > [!TIP]
-        > You can also use base-64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
- 1. Run the application with the following command:
-
-     ```console
-     node quickstart.js
-     ```
-
- ## Create a new JavaScript application for image prompt enhancements
-
- GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored enhancements. When combined with Azure AI Vision, it enhances your chat experience by providing the chat model with more detailed information about visible text in the image and the locations of objects.
-
- The **Optical Character Recognition (OCR)** integration allows the model to produce higher quality responses for dense text, transformed images, and number-heavy financial documents. It also covers a wider range of languages.
-
- The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes.
-
- > [!CAUTION]
- > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
-
- > [!IMPORTANT]
- > Vision enhancements are not supported by the GPT-4 Turbo GA model. They are only available with the preview models.
-
- ## [**TypeScript**](#tab/typescript)
-
- 1. Replace the contents of _quickstart.py_ with the following code.
+   import "dotenv/config";
+   import { AzureOpenAI } from "openai";
+
-     ```typescript
+   // You will need to set these environment variables or edit the following values
+   const endpoint = process.env["AZURE_OPENAI_ENDPOINT"] || "<endpoint>";
+   const apiKey = process.env["AZURE_OPENAI_API_KEY"] || "<api key>";
+   const imageUrl = process.env["IMAGE_URL"] || "<image url>";
+
+   // Required Azure OpenAI deployment name and API version
+   const apiVersion = "2024-07-01-preview";
+   const deploymentName = "gpt-4-with-turbo";
+
+   function getClient() {
+     return new AzureOpenAI({
+       endpoint,
+       apiKey,
+       apiVersion,
+       deployment: deploymentName,
+     });
+   }
+   function createMessages() {
+     return {
+       messages: [
+         { role: "system", content: "You are a helpful assistant." },
+         {
+           role: "user",
+           content: [
+             {
+               type: "text",
+               text: "Describe this picture:",
+             },
+             {
+               type: "image_url",
+               image_url: {
+                 url: imageUrl,
+               },
+             },
+           ],
+         },
+       ],
+       model: "",
+       max_tokens: 2000,
+     };
+   }
+   async function printChoices(completion) {
+     for (const choice of completion.choices) {
+       console.log(choice.message);
+     }
+   }
+   export async function main() {
+     console.log("== Get GPT-4 Turbo with vision Sample ==");
+
+     const client = getClient();
+     const messages = createMessages();
+     const completion = await client.chat.completions.create(messages);
+     await printChoices(completion);
+   }
+
+   main().catch((err) => {
+     console.error("Error occurred:", err);
+   });
    ```

1. Make the following changes:
-     1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-     1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
-     1. Change the value of the `"url"` field to the URL of your image.
-        > [!TIP]
-        > You can also use base-64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
-
-
- 1. Build the application with the following command:
-
-     ```console
-     tsc
-     ```
-
- 1. Run the application with the following command:
-
-     ```console
-     node quickstart.js
-     ```
-
-
- ## [**JavaScript**](#tab/javascript)
-
-     ```javascript
-     ```
-
- 1. Make the following changes:
-     1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-     1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
+     1. Enter the name of your GPT-4 Turbo with Vision deployment in the appropriate field.
    1. Change the value of the `"url"` field to the URL of your image.
       > [!TIP]
       > You can also use base-64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
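The JavaScript sample's `printChoices` logs each `choice.message` object verbatim. For readers who only want the generated text, here is a minimal sketch against the same response shape; the helper name `firstChoiceText` and the mocked response are illustrative assumptions, not part of the sample.

```javascript
// Hypothetical helper (not part of the sample): extract the assistant's text
// from a chat-completion response shaped like the one `printChoices` iterates.
function firstChoiceText(completion) {
  const choice = completion.choices?.[0];
  return choice?.message?.content;
}

// Mocked response with the same `choices[].message` structure the sample expects.
const mock = {
  choices: [{ message: { role: "assistant", content: "A photo of a red apple." } }],
};
console.log(firstChoiceText(mock)); // logs the assistant text
```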
@@ -180,24 +235,6 @@ The **object grounding** integration brings a new layer to data analysis and use

- ---

- 1. Make the following changes:
-     1. Enter your GPT-4 Turbo with Vision deployment name in the appropriate field.
-     1. Enter your Computer Vision endpoint URL and key in the appropriate fields.
-     1. Change the value of the `"url"` field to the URL of your image.
-        > [!TIP]
-        > You can also use base-64 encoded image data instead of a URL. For more information, see the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-a-local-image).
- 1. Run the application with the `python` command:
-
-     ```console
-     python quickstart.py
-     ```
-
- ## Create a new JavaScript application for video prompt enhancements
-
- Video prompt integration is outside the scope of this quickstart. See the [GPT-4 Turbo with Vision how-to guide](../how-to/gpt-with-vision.md#use-vision-enhancement-with-video) for detailed instructions on setting up video prompts in chat completions programmatically.
-
- ---
-
## Clean up resources

If you want to clean up and remove an Azure OpenAI resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.