Commit fe637f6

feat(client): add support for private API
1 parent 5f6edc3 commit fe637f6

2 files changed: +46 −2 lines changed

README.md

Lines changed: 31 additions & 0 deletions
```diff
@@ -4,6 +4,13 @@
 
 ## Updates
 <details open>
+<summary><strong>2023-02-11</strong></summary>
+
+With the help of @PawanOsman, we've figured out a way to continue using the ChatGPT raw models directly. In an attempt to prevent the models from being disabled again, we've decided to provide a reverse proxy (i.e. a private API server) with a completions endpoint that's compatible with the OpenAI API. I've updated `ChatGPTClient` to support using a reverse proxy URL instead of the default OpenAI API endpoint. See [Using a Reverse Proxy](#using-a-reverse-proxy) for more information.
+
+Please note that if you choose to go this route, you are exposing your API key or session token to a third-party server. If you are concerned about this, you may choose to use the official OpenAI API instead, with the `text-davinci-003` model.
+</details>
+<details>
 <summary><strong>2023-02-10</strong></summary>
 
 ~~I've found a new working model for `text-chat-davinci-002`, `text-chat-davinci-002-sh-alpha-aoruigiofdj83`. This is the raw model that the ChatGPT Plus "Turbo" version uses. Responses are blazing fast. I've updated the library to use this model.~~
```
```diff
@@ -75,6 +82,7 @@ By itself, the model does not have any conversational support, so `ChatGPTClient
 * [Module](#module)
 * [API Server](#api-server)
 * [CLI](#cli)
+* [Using a Reverse Proxy](#using-a-reverse-proxy)
 * [Caveats](#caveats)
 * [Contributing](#contributing)
 * [License](#license)
```
```diff
@@ -360,6 +368,29 @@ npm run cli
 
 ChatGPT's responses are automatically copied to your clipboard, so you can paste them into other applications.
 
+## Using a Reverse Proxy
+As shown in the examples above, you can set `reverseProxyUrl` in `ChatGPTClient`'s options to use a private API instead of the official ChatGPT API.
+For now, this is the only way to use the ChatGPT raw models directly.
+
+Depending on whose private API you use, there are some things you have to do differently to make it work with `ChatGPTClient`, and some things may not work as expected. Instructions and any caveats are provided below.
+
+<details open>
+<summary><strong>https://chatgpt.pawan.krd/api/completions</strong> (@PawanOsman)</summary>
+
+#### Instructions
+1. Set `reverseProxyUrl` to `https://chatgpt.pawan.krd/api/completions` in `settings.js` or `ChatGPTClient`'s options.
+2. Set the OpenAI API key to your ChatGPT session's access token instead of your actual OpenAI API key.
+    * You can find your ChatGPT session's access token by logging in to [ChatGPT](https://chat.openai.com/) and then going to https://chat.openai.com/api/auth/session (look for the `accessToken` property).
+    * **Fetching or refreshing your ChatGPT session's access token is not currently supported by this library.**
+3. Set the `model` to `text-davinci-002-render`, `text-davinci-002-render-paid`, or any other ChatGPT model that your account has access to. The model **must** be a ChatGPT model name, not a raw model name, and you cannot use a model that your account does not have access to.
+    * You can check which ones you have access to by opening DevTools and going to the Network tab. Refresh the page and look at the response body for https://chat.openai.com/backend-api/models.
+
+#### Caveats
+- Custom stop sequences (`stop`) are not supported. You must handle that yourself or modify your `promptPrefix` to work around it.
+- Frequency and presence penalties don't appear to do anything.
+- Temperature doesn't work as expected. Setting it between 0 and 1 appears to have an effect, but setting it above 1 doesn't seem to do anything.
+</details>
+
 ## Caveats
 Since `text-chat-davinci-002` is ChatGPT's raw model, I had to do my best to replicate the way the official ChatGPT website uses it. After extensive testing and comparing responses, I believe that the model used by ChatGPT has some additional fine-tuning.
 This means my implementation or the raw model may not behave exactly the same in some ways:
```
src/ChatGPTClient.js

Lines changed: 15 additions & 2 deletions
```diff
@@ -74,6 +74,7 @@ export default class ChatGPTClient {
         return new Promise(async (resolve, reject) => {
             const controller = new AbortController();
             try {
+                let done = false;
                 await fetchEventSource(url, {
                     ...opts,
                     signal: controller.signal,
```
```diff
@@ -96,7 +97,15 @@ export default class ChatGPTClient {
                         throw error;
                     },
                     onclose() {
-                        throw new Error(`Failed to send message. Server closed the connection unexpectedly.`);
+                        if (debug) {
+                            console.debug('Server closed the connection unexpectedly, returning...');
+                        }
+                        // workaround for private API not sending [DONE] event
+                        if (!done) {
+                            onProgress('[DONE]');
+                            controller.abort();
+                            resolve();
+                        }
                     },
                     onerror(err) {
                         if (debug) {
```
```diff
@@ -113,6 +122,7 @@ export default class ChatGPTClient {
                             onProgress('[DONE]');
                             controller.abort();
                             resolve();
+                            done = true;
                             return;
                         }
                         onProgress(JSON.parse(message.data));
```
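The workaround in the two hunks above hinges on the `done` flag: a server-side close only synthesizes a `[DONE]` event when the stream never delivered one itself. A minimal, library-free simulation of that guard (the handler names mirror `fetchEventSource`'s callbacks, but the function itself is illustrative):

```javascript
// Simulates the commit's onclose workaround: if the connection closes
// without a [DONE] event (as the private API does), treat the close as a
// normal completion instead of throwing.
function createStreamHandlers(onProgress, resolve) {
    let done = false;
    return {
        onmessage(message) {
            if (message.data === '[DONE]') {
                onProgress('[DONE]');
                resolve();
                done = true; // remember that the stream finished properly
                return;
            }
            onProgress(JSON.parse(message.data));
        },
        onclose() {
            // workaround for private API not sending [DONE] event
            if (!done) {
                onProgress('[DONE]');
                resolve();
            }
        },
    };
}
```

With this guard, a premature close still resolves the promise exactly once, and a well-behaved stream that does send `[DONE]` is not resolved a second time when the connection closes afterwards.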
```diff
@@ -174,6 +184,9 @@ export default class ChatGPTClient {
             if (this.options.debug) {
                 console.debug(token);
             }
+            if (token === this.endToken) {
+                return;
+            }
             opts.onProgress(token);
             reply += token;
         });
```
```diff
@@ -182,7 +195,7 @@ export default class ChatGPTClient {
             if (this.options.debug) {
                 console.debug(JSON.stringify(result));
             }
-            reply = result.choices[0].text;
+            reply = result.choices[0].text.replace(this.endToken, '');
         }
 
         // avoids some rendering issues when using the CLI app
```
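The final hunk strips the end token from non-streaming replies. Note that `String.prototype.replace` with a string pattern removes only the first occurrence, which is sufficient when the token terminates the completion. A small illustration with a hypothetical end token (the actual value of `this.endToken` is defined elsewhere in the class and does not appear in this diff):

```javascript
// Hypothetical end token; the real `this.endToken` depends on the model.
const endToken = '<|im_end|>';

// Mirrors `result.choices[0].text.replace(this.endToken, '')` from the diff:
// removes the first (and, at the end of a completion, only) occurrence.
function stripEndToken(text) {
    return text.replace(endToken, '');
}
```

For example, `stripEndToken('Hello there!<|im_end|>')` yields `'Hello there!'`, while text without the token passes through unchanged.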
