Commit 6d5a6e8

Merge pull request #4042 from santiagxf/satiagxf/js

Fix JS examples

2 parents edd958c + ec9d6e3

File tree

6 files changed: +75 -74 lines

articles/ai-foundry/model-inference/includes/how-to-prerequisites-javascript.md

Lines changed: 30 additions & 8 deletions

````diff
@@ -11,14 +11,36 @@ author: santiagxf
 
 ```bash
 npm install @azure-rest/ai-inference
+npm install @azure/core-auth
+npm install @azure/identity
 ```
 
-* Import the following modules:
+If you are using Node.js, you can configure the dependencies in **package.json**:
 
-```javascript
-import ModelClient, { isUnexpected } from "@azure-rest/ai-inference";
-import { AzureKeyCredential } from "@azure/core-auth";
-import { DefaultAzureCredential } from "@azure/identity";
-import { createRestError } from "@azure-rest/core-client";
-```
-
+__package.json__
+
+```json
+{
+  "name": "main_app",
+  "version": "1.0.0",
+  "description": "",
+  "main": "app.js",
+  "type": "module",
+  "dependencies": {
+    "@azure-rest/ai-inference": "1.0.0-beta.6",
+    "@azure/core-auth": "1.9.0",
+    "@azure/core-sse": "2.2.0",
+    "@azure/identity": "4.8.0"
+  }
+}
+```
+
+* Import the following:
+
+```javascript
+import ModelClient from "@azure-rest/ai-inference";
+import { isUnexpected } from "@azure-rest/ai-inference";
+import { createSseStream } from "@azure/core-sse";
+import { AzureKeyCredential } from "@azure/core-auth";
+import { DefaultAzureCredential } from "@azure/identity";
+```
````
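The **package.json** introduced above pins each dependency to an exact version and sets `"type": "module"`, which is what lets `app.js` use `import` syntax in plain Node.js. As a quick sanity check that the manifest is valid JSON, a sketch (no Azure SDK required; the manifest text is copied from the diff):

```javascript
// Parse the manifest from the diff and inspect it.
// "type": "module" enables ES module `import` syntax in app.js.
const manifest = JSON.parse(`{
  "name": "main_app",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "type": "module",
  "dependencies": {
    "@azure-rest/ai-inference": "1.0.0-beta.6",
    "@azure/core-auth": "1.9.0",
    "@azure/core-sse": "2.2.0",
    "@azure/identity": "4.8.0"
  }
}`);

console.log(manifest.type);                              // "module"
console.log(Object.keys(manifest.dependencies).length);  // 4
```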

articles/ai-foundry/model-inference/includes/use-chat-completions/javascript.md

Lines changed: 14 additions & 14 deletions

````diff
@@ -32,29 +32,22 @@ To use chat completion models in your application, you need:
 
 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
 
-
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { AzureKeyCredential } from "@azure/core-auth";
-
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
     new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
 );
 ```
 
 If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
-
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { DefaultAzureCredential } from "@azure/identity";
+const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
 
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
     new DefaultAzureCredential()
+    clientOptions,
 );
 ```
 
@@ -70,6 +63,7 @@ var messages = [
 
 var response = await client.path("/chat/completions").post({
     body: {
+        model: "mistral-large-2407",
         messages: messages,
     }
 });
@@ -122,7 +116,9 @@ var messages = [
 
 var response = await client.path("/chat/completions").post({
     body: {
+        model: "mistral-large-2407",
         messages: messages,
+        stream: true,
     }
 }).asNodeStream();
 ```
@@ -148,7 +144,7 @@ for await (const event of sses) {
         return;
     }
     for (const choice of (JSON.parse(event.data)).choices) {
-        console.log(choice.delta?.content ?? "");
+        process.stdout.write(choice.delta?.content ?? "");
     }
 }
 ```
@@ -165,6 +161,7 @@ var messages = [
 
 var response = await client.path("/chat/completions").post({
     body: {
+        model: "mistral-large-2407",
         messages: messages,
         presence_penalty: "0.1",
         frequency_penalty: "0.8",
@@ -195,6 +192,7 @@ var messages = [
 
 var response = await client.path("/chat/completions").post({
     body: {
+        model: "mistral-large-2407",
         messages: messages,
         response_format: { type: "json_object" }
     }
@@ -217,6 +215,7 @@ var response = await client.path("/chat/completions").post({
         "extra-params": "pass-through"
     },
     body: {
+        model: "mistral-large-2407",
         messages: messages,
         logprobs: true
     }
@@ -280,6 +279,7 @@ Prompt the model to book flights with the help of this function:
 ```javascript
 var result = await client.path("/chat/completions").post({
     body: {
+        model: "mistral-large-2407",
         messages: messages,
         tools: tools,
         tool_choice: "auto"
````
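Across these hunks the same edit repeats: the deployment is no longer fixed when the client is created, so every request body now carries a `model` field. A minimal sketch of that per-request pattern (a plain object with a hypothetical helper, no SDK call; `mistral-large-2407` is the deployment name used in the diff):

```javascript
// The updated examples name the target model in each request body,
// so one client can address several deployments behind the same endpoint.
function buildChatRequest(model, messages, extra = {}) {
  return {
    body: {
      model,       // deployment name, e.g. "mistral-large-2407"
      messages,
      ...extra,    // stream, response_format, tools, logprobs, ...
    },
  };
}

const request = buildChatRequest(
  "mistral-large-2407",
  [{ role: "user", content: "How many languages are in the world?" }],
  { stream: true }
);

console.log(request.body.model);   // "mistral-large-2407"
console.log(request.body.stream);  // true
```

With the REST-style client, an object of this shape is what `client.path("/chat/completions").post(...)` receives; streamed responses additionally go through `.asNodeStream()` and `createSseStream`, as the diff shows.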

articles/ai-foundry/model-inference/includes/use-chat-multi-modal/javascript.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -35,9 +35,9 @@ To use chat completion models in your application, you need:
 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
 
 ```javascript
-const client = new ModelClient(
+const client = ModelClient(
     "https://<resource>.services.ai.azure.com/models",
-    new AzureKeyCredential(process.env.AZUREAI_ENDPOINT_KEY)
+    new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
 );
 ```
 
@@ -46,9 +46,9 @@ If you've configured the resource with **Microsoft Entra ID** support, you can u
 ```javascript
 const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
 
-const client = new ModelClient(
+const client = ModelClient(
     "https://<resource>.services.ai.azure.com/models",
-    new DefaultAzureCredential(),
+    new DefaultAzureCredential()
     clientOptions,
 );
 ```
````
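One caveat on the Entra ID snippets in this commit: the literal `{ credentials: { "https://cognitiveservices.azure.com" } }` is not valid JavaScript as written, since an object key needs a value. A sketch of the shape such client options conventionally take for a token scope (an assumption based on the `@azure-rest` core client's `credentials.scopes` option and the usual `/.default` scope suffix, not part of this commit):

```javascript
// Hypothetical corrected shape: the Entra ID token scope goes in
// `credentials.scopes` (an array of strings), not as a bare string key.
const clientOptions = {
  credentials: {
    scopes: ["https://cognitiveservices.azure.com/.default"],
  },
};

console.log(clientOptions.credentials.scopes[0]);
```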

articles/ai-foundry/model-inference/includes/use-chat-reasoning/javascript.md

Lines changed: 5 additions & 16 deletions

````diff
@@ -30,31 +30,20 @@ To complete this tutorial, you need:
 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
 
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { AzureKeyCredential } from "@azure/core-auth";
-
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
     new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
 );
 ```
 
-> [!TIP]
-> Verify that you have deployed the model to Azure AI Services resource with the Azure AI model inference API. `Deepseek-R1` is also available as Serverless API Endpoints. However, those endpoints don't take the parameter `model` as explained in this tutorial. You can verify that by going to [Azure AI Foundry portal]() > Models + endpoints, and verify that the model is listed under the section **Azure AI Services**.
-
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { DefaultAzureCredential } from "@azure/identity";
-
 const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
 
-const client = new ModelClient(
+const client = ModelClient(
     "https://<resource>.services.ai.azure.com/models",
-    new DefaultAzureCredential(),
+    new DefaultAzureCredential()
     clientOptions,
 );
 ```
````

articles/ai-foundry/model-inference/includes/use-embeddings/javascript.md

Lines changed: 15 additions & 19 deletions

````diff
@@ -32,31 +32,22 @@ To use embedding models in your application, you need:
 
 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
 
-
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { AzureKeyCredential } from "@azure/core-auth";
-
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
-    new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL),
-    "text-embedding-3-small"
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
+    new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
 );
 ```
 
-If you have configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
-
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { DefaultAzureCredential } from "@azure/identity";
-
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
-    new DefaultAzureCredential(),
-    "text-embedding-3-small"
+const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
+
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
+    new DefaultAzureCredential()
+    clientOptions,
 );
 ```
 
@@ -67,6 +58,7 @@ Create an embedding request to see the output of the model.
 ```javascript
 var response = await client.path("/embeddings").post({
     body: {
+        model: "text-embedding-3-small",
         input: ["The ultimate answer to the question of life"],
     }
 });
@@ -94,6 +86,7 @@ It can be useful to compute embeddings in input batches. The parameter `inputs`
 ```javascript
 var response = await client.path("/embeddings").post({
     body: {
+        model: "text-embedding-3-small",
         input: [
             "The ultimate answer to the question of life",
             "The largest planet in our solar system is Jupiter",
@@ -126,6 +119,7 @@ You can specify the number of dimensions for the embeddings. The following examp
 ```javascript
 var response = await client.path("/embeddings").post({
     body: {
+        model: "text-embedding-3-small",
         input: ["The ultimate answer to the question of life"],
         dimensions: 1024,
     }
@@ -142,6 +136,7 @@ The following example shows how to create embeddings that are used to create an
 ```javascript
 var response = await client.path("/embeddings").post({
     body: {
+        model: "text-embedding-3-small",
         input: ["The answer to the ultimate question of life, the universe, and everything is 42"],
         input_type: "document",
     }
@@ -154,6 +149,7 @@ When you work on a query to retrieve such a document, you can use the following
 ```javascript
 var response = await client.path("/embeddings").post({
     body: {
+        model: "text-embedding-3-small",
         input: ["What's the ultimate meaning of life?"],
         input_type: "query",
     }
````
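The embeddings hunks follow the same pattern as the chat ones: `model` moves out of the client constructor and into each request body, next to `input` and optional knobs such as `dimensions` and `input_type`. A minimal sketch of assembling such a payload (a plain object with a hypothetical helper, no SDK call; the field names are taken from the diff):

```javascript
// Build an /embeddings request body the way the updated examples do.
function buildEmbeddingsRequest(model, input, options = {}) {
  return {
    body: {
      model,       // e.g. "text-embedding-3-small"
      input,       // a batch of one or more strings
      ...options,  // dimensions, input_type ("document" | "query"), ...
    },
  };
}

const queryRequest = buildEmbeddingsRequest(
  "text-embedding-3-small",
  ["What's the ultimate meaning of life?"],
  { input_type: "query", dimensions: 1024 }
);

console.log(queryRequest.body.model);       // "text-embedding-3-small"
console.log(queryRequest.body.input_type);  // "query"
```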

articles/ai-foundry/model-inference/includes/use-image-embeddings/javascript.md

Lines changed: 7 additions & 13 deletions

````diff
@@ -34,28 +34,22 @@ To use embedding models in your application, you need:
 
 First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
 
-
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { AzureKeyCredential } from "@azure/core-auth";
-
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
     new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
 );
 ```
 
-If you configured the resource to with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
+If you've configured the resource with **Microsoft Entra ID** support, you can use the following code snippet to create a client.
 
 ```javascript
-import ModelClient from "@azure-rest/ai-inference";
-import { isUnexpected } from "@azure-rest/ai-inference";
-import { DefaultAzureCredential } from "@azure/identity";
+const clientOptions = { credentials: { "https://cognitiveservices.azure.com" } };
 
-const client = new ModelClient(
-    process.env.AZURE_INFERENCE_ENDPOINT,
+const client = ModelClient(
+    "https://<resource>.services.ai.azure.com/models",
     new DefaultAzureCredential()
+    clientOptions,
 );
 ```
````
