Commit 0eac0c4

update for JSR and a bit of cleanup
1 parent 4dfb0c6 commit 0eac0c4

File tree

1 file changed (+65 −20)


README.md

Lines changed: 65 additions & 20 deletions
````diff
@@ -1,25 +1,51 @@
-# OpenAI Node API Library
+# OpenAI Library for TypeScript and JavaScript
 
 [![NPM version](https://img.shields.io/npm/v/openai.svg)](https://npmjs.org/package/openai) ![npm bundle size](https://img.shields.io/bundlephobia/minzip/openai) [![JSR Version](https://jsr.io/badges/@openai/openai)](https://jsr.io/@openai/openai)
 
-This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.
+This library provides convenient access to the OpenAI REST API in server-side TypeScript or JavaScript applications. It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/). To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).
 
-It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
+## Installation
 
-To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).
+This module is distributed on both the [npm](https://www.npmjs.com/package/openai) and [JSR](https://jsr.io/@openai/openai) registries.
 
-## Installation
+**Install from npm**
 
 ```sh
 npm install openai
+yarn add openai
+pnpm add openai
+bun install openai
+```
+
+These commands will make the module importable as the default export from
+`openai` in JavaScript runtimes that use npm (Node.js, Cloudflare Workers, Bun, etc.):
+
+```ts
+import OpenAI from 'openai';
+```
+
+**Install from JSR**
+
+```sh
+deno add jsr:@openai/openai
+npx jsr add @openai/openai
+yarn dlx jsr add @openai/openai
+pnpm dlx jsr add @openai/openai
+bunx jsr add @openai/openai
 ```
 
-You can also import from jsr:
+These commands will make the module importable from the `@openai` scope:
+
+```ts
+import OpenAI from "@openai/openai";
+```
+
+You can also [import directly from JSR](https://jsr.io/docs/using-packages#importing-with-jsr-specifiers) without an install step if you're using the Deno JavaScript runtime:
 
 <!-- x-release-please-start-version -->
 
 ```ts
-import OpenAI from 'jsr:@openai/openai';
+import OpenAI from "jsr:@openai/openai";
 ```
 
 <!-- x-release-please-end -->
````
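Both install paths expose the same client as the default export; only the import specifier differs. As a rough sketch (the `importSpecifier` helper is hypothetical, not part of the SDK), the specifier the hunk above pairs with each registry/runtime combination:

```typescript
// Hypothetical helper summarizing the import specifiers from the hunk above.
type Registry = 'npm' | 'jsr';

function importSpecifier(registry: Registry, denoBareImport = false): string {
  if (registry === 'npm') return 'openai'; // after `npm install openai` et al.
  // JSR installs expose the package under the @openai scope; Deno can also
  // skip the install step and use a jsr: specifier directly.
  return denoBareImport ? 'jsr:@openai/openai' : '@openai/openai';
}

console.log(importSpecifier('npm'));       // "openai"
console.log(importSpecifier('jsr'));       // "@openai/openai"
console.log(importSpecifier('jsr', true)); // "jsr:@openai/openai"
```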
````diff
@@ -39,7 +65,7 @@ const client = new OpenAI({
 async function main() {
   const chatCompletion = await client.chat.completions.create({
     messages: [{ role: 'user', content: 'Say this is a test' }],
-    model: 'gpt-3.5-turbo',
+    model: 'gpt-4o',
   });
 }
 
````
````diff
@@ -57,7 +83,7 @@ const client = new OpenAI();
 
 async function main() {
   const stream = await client.chat.completions.create({
-    model: 'gpt-4',
+    model: 'gpt-4o',
     messages: [{ role: 'user', content: 'Say this is a test' }],
     stream: true,
   });
````
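The `stream: true` request above resolves to an async iterable of chunks, each carrying a content delta. A minimal sketch of how such a stream is consumed, with a mock generator standing in for the real API call so it runs offline:

```typescript
// Mock of a streamed chat completion: each chunk carries a content delta.
type Chunk = { choices: { delta: { content?: string } }[] };

async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ['This ', 'is ', 'a ', 'test']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Accumulate the deltas with for-await, the same loop shape used on a real stream.
async function main(): Promise<string> {
  let text = '';
  for await (const chunk of mockStream()) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

main().then((t) => console.log(t)); // "This is a test"
```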
````diff
@@ -87,7 +113,7 @@ const client = new OpenAI({
 async function main() {
   const params: OpenAI.Chat.ChatCompletionCreateParams = {
     messages: [{ role: 'user', content: 'Say this is a test' }],
-    model: 'gpt-3.5-turbo',
+    model: 'gpt-4o',
   };
   const chatCompletion: OpenAI.Chat.ChatCompletion = await client.chat.completions.create(params);
 }
````
````diff
@@ -173,7 +199,7 @@ const openai = new OpenAI();
 
 async function main() {
   const stream = await openai.beta.chat.completions.stream({
-    model: 'gpt-4',
+    model: 'gpt-4o',
     messages: [{ role: 'user', content: 'Say this is a test' }],
     stream: true,
   });
````
````diff
@@ -226,7 +252,7 @@ const client = new OpenAI();
 async function main() {
   const runner = client.beta.chat.completions
     .runTools({
-      model: 'gpt-3.5-turbo',
+      model: 'gpt-4o',
       messages: [{ role: 'user', content: 'How is the weather this week?' }],
       tools: [
         {
````
````diff
@@ -333,7 +359,7 @@ a subclass of `APIError` will be thrown:
 ```ts
 async function main() {
   const job = await client.fineTuning.jobs
-    .create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
+    .create({ model: 'gpt-4o', training_file: 'file-abc123' })
     .catch(async (err) => {
       if (err instanceof OpenAI.APIError) {
         console.log(err.status); // 400
````
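The pattern above branches on `err instanceof OpenAI.APIError` inside a promise `.catch`. A self-contained sketch of the same pattern, with a stand-in `ApiError` class (hypothetical, not the SDK's class) so it runs without the SDK or network access:

```typescript
// Stand-in for the SDK's APIError, carrying the fields the diff logs.
class ApiError extends Error {
  constructor(public status: number, public request_id: string, message: string) {
    super(message);
  }
}

// Simulates an API call that fails with a 400.
async function createJob(fail: boolean): Promise<string> {
  if (fail) throw new ApiError(400, 'req_123', 'BadRequestError');
  return 'ftjob-ok';
}

// Same shape as the README snippet: recover from known API errors, rethrow the rest.
async function run(): Promise<string> {
  return createJob(true).catch((err) => {
    if (err instanceof ApiError) {
      return `handled ${err.status} ${err.request_id}`;
    }
    throw err;
  });
}

run().then(console.log); // "handled 400 req_123"
```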
````diff
@@ -368,7 +394,10 @@ Error codes are as followed:
 All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.
 
 ```ts
-const completion = await client.chat.completions.create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4' });
+const completion = await client.chat.completions.create({
+  model: 'gpt-4o',
+  messages: [{ role: 'user', content: 'Say this is a test' }],
+});
 console.log(completion._request_id) // req_123
 ```
 
````
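A rough sketch of the mechanism this hunk describes: the `x-request-id` response header is copied onto the returned object as `_request_id`. The `attachRequestId` helper below is illustrative, not the SDK's actual code; `Headers` is the standard fetch API class available in Node.js 18+:

```typescript
// Simulated response headers from the API.
const headers = new Headers({ 'x-request-id': 'req_123' });

// Copy the x-request-id header onto the parsed response body.
function attachRequestId<T extends object>(body: T, h: Headers): T & { _request_id: string | null } {
  return { ...body, _request_id: h.get('x-request-id') };
}

const completion = attachRequestId({ id: 'chatcmpl-abc' }, headers);
console.log(completion._request_id); // "req_123"
```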
````diff
@@ -392,7 +421,7 @@ const azureADTokenProvider = getBearerTokenProvider(credential, scope);
 const openai = new AzureOpenAI({ azureADTokenProvider });
 
 const result = await openai.chat.completions.create({
-  model: 'gpt-4-1106-preview',
+  model: 'gpt-4o',
   messages: [{ role: 'user', content: 'Say hello!' }],
 });
 
````
````diff
@@ -415,7 +444,15 @@ const client = new OpenAI({
 });
 
 // Or, configure per-request:
-await client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in Node.js?' }], model: 'gpt-3.5-turbo' }, {
+await client.chat.completions.create({
+  model: 'gpt-4o',
+  messages: [
+    {
+      role: 'user',
+      content: 'How can I get the name of the current day in JavaScript?',
+    }
+  ]
+}, {
   maxRetries: 5,
 });
 ```
````
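The second argument above carries per-request options that override the client-wide defaults. A hypothetical sketch of that merge (not the SDK's implementation): per-request values win, and anything unset falls back to the client default:

```typescript
// Options that can be set on the client or overridden per request.
interface RequestOptions { maxRetries?: number; timeout?: number }

class Client {
  constructor(private defaults: Required<RequestOptions>) {}

  // Spread order makes per-request values shadow the client defaults.
  resolve(perRequest: RequestOptions = {}): Required<RequestOptions> {
    return { ...this.defaults, ...perRequest };
  }
}

const client = new Client({ maxRetries: 2, timeout: 600_000 });
console.log(client.resolve({ maxRetries: 5 })); // { maxRetries: 5, timeout: 600000 }
```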
````diff
@@ -432,7 +469,15 @@ const client = new OpenAI({
 });
 
 // Override per-request:
-await client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-3.5-turbo' }, {
+await client.chat.completions.create({
+  model: 'gpt-4o',
+  messages: [
+    {
+      role: 'user',
+      content: 'How can I list all files in a directory using Python?'
+    }
+  ]
+}, {
   timeout: 5 * 1000,
 });
 ```
````
````diff
@@ -485,13 +530,13 @@ You can also use the `.withResponse()` method to get the raw `Response` along wi
 const client = new OpenAI();
 
 const response = await client.chat.completions
-  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
+  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
 console.log(response.statusText); // access the underlying Response object
 
 const { data: chatCompletion, response: raw } = await client.chat.completions
-  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
+  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
 console.log(chatCompletion);
````
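A minimal sketch of the `{ data, response }` pair that `.withResponse()` resolves to in the hunk above. The `withResponse` function here is a stand-in, not the SDK method, and the example is mocked so it runs offline; `Response` is the standard fetch API class available in Node.js 18+:

```typescript
// Stand-in for .withResponse(): pair the parsed body with the raw Response.
async function withResponse<T>(data: T, raw: Response): Promise<{ data: T; response: Response }> {
  return { data, response: raw };
}

// Mocked raw HTTP response with a custom header.
const raw = new Response('{}', { headers: { 'X-My-Header': 'hello' } });

withResponse({ id: 'chatcmpl-abc' }, raw).then(({ data, response }) => {
  console.log(response.headers.get('X-My-Header')); // "hello"
  console.log(data.id); // "chatcmpl-abc"
});
```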
````diff
@@ -622,7 +667,7 @@ TypeScript >= 4.5 is supported.
 The following runtimes are supported:
 
 - Node.js 18 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
-- Deno v1.28.0 or higher, using `import OpenAI from "npm:openai"`.
+- Deno v1.28.0 or higher.
 - Bun 1.0 or later.
 - Cloudflare Workers.
 - Vercel Edge Runtime.
````
