Commit 4c452d8

Enhance Yorkie Intelligence to support OpenAI-compatible providers (#578)
This commit adds support for OpenAI and OpenAI-compatible providers in addition to Ollama. It also allows custom base URLs and API keys for provider configuration.
Parent: 3e5f134
File tree: 3 files changed (+89 −4 lines)
backend/.env.development

Lines changed: 19 additions & 2 deletions
```diff
@@ -50,8 +50,14 @@ YORKIE_API_ADDR=http://localhost:8080
 YORKIE_PROJECT_SECRET_KEY=""
 
 # YORKIE_INTELLIGENCE: Whether to enable Yorkie Intelligence for collaborative editing.
-# Available options: false, ollama:llama3.1, ollama:gemma2, ollama:gemma2:2b, ollama:phi3, ollama:mistral, ollama:neural-chat, ollama:starling-lm, ollama:solar, openai:gpt-3.5-turbo, openai:gpt-4o-mini, etc.
-# If set to openai:gpt-3.5-turbo or openai:gpt-4o-mini, OPENAI_API_KEY is not required.
+# Available providers:
+# - ollama: Use Ollama models (requires OLLAMA_HOST_URL)
+#   Example: ollama:llama3.1, ollama:gemma2, ollama:gemma2:2b, ollama:phi3, ollama:mistral, ollama:neural-chat, ollama:starling-lm, ollama:solar
+# - openai: Use OpenAI API (optionally use OPENAI_BASE_URL for custom endpoint)
+#   Example: openai:gpt-3.5-turbo, openai:gpt-4o-mini
+# - openai-compat: Use OpenAI-compatible API servers (requires OPENAI_COMPAT_BASE_URL)
+#   Example: openai-compat:mistral-7b (for vLLM), openai-compat:gpt-3.5-turbo (for LocalAI)
+# Set to "false" to disable.
 YORKIE_INTELLIGENCE="ollama:llama3.2:1b"
 
 # OLLAMA_HOST_URL: yorkie-intelligence ollama url
@@ -61,6 +67,17 @@ OLLAMA_HOST_URL=http://localhost:11434
 # This key is required when the YORKIE_INTELLIGENCE is set to openai:gpt-3.5-turbo or openai:gpt-4o-mini.
 # To obtain an API key, visit OpenAI: https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key
 OPENAI_API_KEY=your_openai_api_key_here
+# OPENAI_BASE_URL: Custom base URL for OpenAI API (optional).
+# Use this to connect to OpenAI-compatible servers using the openai provider.
+# Example: https://your-openai-proxy.com/v1
+OPENAI_BASE_URL=
+
+# OPENAI_COMPAT_BASE_URL: Base URL for OpenAI-compatible API server (required for openai-compat provider).
+# Use this for vLLM, LocalAI, Ollama OpenAI mode, etc.
+# Example: http://localhost:8000/v1
+OPENAI_COMPAT_BASE_URL=
+# OPENAI_COMPAT_API_KEY: API key for OpenAI-compatible server (optional, some servers don't require it).
+OPENAI_COMPAT_API_KEY=
 
 # LANGCHAIN_TRACING_V2: Whether LangSmith monitoring for YorkieIntelligence is needed.
 # Set to true if LangSmith monitoring is required.
```
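A `YORKIE_INTELLIGENCE` value encodes both a provider and a model name, and Ollama model names can themselves contain colons (e.g. `ollama:llama3.2:1b` means provider `ollama`, model `llama3.2:1b`), so the provider must be split off at the first colon only. The parsing logic itself is not shown in this diff; a minimal sketch of that rule, using a hypothetical helper name, might look like:

```typescript
// Hypothetical helper, not code from this commit: split a
// YORKIE_INTELLIGENCE value into provider and model at the FIRST
// colon only, since model names like "llama3.2:1b" contain colons.
function parseIntelligence(value: string): { provider: string; model: string } {
  const idx = value.indexOf(":");
  if (idx === -1) {
    throw new Error(`Invalid YORKIE_INTELLIGENCE value: ${value}`);
  }
  return { provider: value.slice(0, idx), model: value.slice(idx + 1) };
}

const parsed = parseIntelligence("ollama:llama3.2:1b");
// parsed.provider is "ollama", parsed.model is "llama3.2:1b"
```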

backend/README.md

Lines changed: 44 additions & 0 deletions
````diff
@@ -80,6 +80,50 @@ pnpm backend start
 
 Starts the server in production mode.
 
+## Yorkie Intelligence Configuration
+
+Yorkie Intelligence provides AI-powered features for collaborative editing. You can configure it using the `YORKIE_INTELLIGENCE` environment variable in `.env.development`.
+
+### Available Providers
+
+| Provider          | Format                  | Description                                             | Required Environment Variables                               |
+| ----------------- | ----------------------- | ------------------------------------------------------- | ------------------------------------------------------------ |
+| **ollama**        | `ollama:<model>`        | Use local Ollama models                                 | `OLLAMA_HOST_URL`                                             |
+| **openai**        | `openai:<model>`        | Use OpenAI API                                          | `OPENAI_API_KEY`, optionally `OPENAI_BASE_URL`                |
+| **openai-compat** | `openai-compat:<model>` | Use OpenAI-compatible API servers (vLLM, LocalAI, etc.) | `OPENAI_COMPAT_BASE_URL`, optionally `OPENAI_COMPAT_API_KEY`  |
+
+### Examples
+
+```bash
+# Disable Yorkie Intelligence
+YORKIE_INTELLIGENCE="false"
+
+# Use Ollama (local)
+YORKIE_INTELLIGENCE="ollama:llama3.2:1b"
+OLLAMA_HOST_URL="http://localhost:11434"
+
+# Use OpenAI
+YORKIE_INTELLIGENCE="openai:gpt-4o-mini"
+OPENAI_API_KEY="sk-xxx"
+
+# Use OpenAI with custom endpoint (proxy)
+YORKIE_INTELLIGENCE="openai:gpt-4"
+OPENAI_API_KEY="sk-xxx"
+OPENAI_BASE_URL="https://your-proxy.com/v1"
+
+# Use vLLM server
+YORKIE_INTELLIGENCE="openai-compat:mistral-7b"
+OPENAI_COMPAT_BASE_URL="http://localhost:8000/v1"
+
+# Use LocalAI
+YORKIE_INTELLIGENCE="openai-compat:gpt-3.5-turbo"
+OPENAI_COMPAT_BASE_URL="http://localhost:8080/v1"
+
+# Use Ollama in OpenAI-compatible mode
+YORKIE_INTELLIGENCE="openai-compat:llama3"
+OPENAI_COMPAT_BASE_URL="http://localhost:11434/v1"
+```
+
 ## Directory Structure
 
 ```
````
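The required-variable column in the README table lends itself to an upfront check before the backend starts. A hypothetical pre-flight validator (illustration only, not part of this commit) could mirror the table like so:

```typescript
// Hypothetical pre-flight check mirroring the provider table in the
// README; not part of this commit.
type Env = Record<string, string | undefined>;

function checkIntelligenceEnv(intelligence: string, env: Env): string[] {
  const errors: string[] = [];
  if (intelligence === "false") return errors; // feature disabled
  const provider = intelligence.split(":")[0];
  if (provider === "ollama" && !env.OLLAMA_HOST_URL) {
    errors.push("ollama provider requires OLLAMA_HOST_URL");
  } else if (provider === "openai" && !env.OPENAI_API_KEY) {
    errors.push("openai provider requires OPENAI_API_KEY");
  } else if (provider === "openai-compat" && !env.OPENAI_COMPAT_BASE_URL) {
    errors.push("openai-compat provider requires OPENAI_COMPAT_BASE_URL");
  }
  return errors;
}
```

Running such a check at boot surfaces a missing `OPENAI_COMPAT_BASE_URL` immediately instead of at the first model call.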

backend/src/langchain/langchain.module.ts

Lines changed: 26 additions & 2 deletions
```diff
@@ -24,13 +24,37 @@ const chatModelFactory = {
         streaming: true,
       });
     } else if (provider === "openai") {
-      chatModel = new ChatOpenAI({ modelName: model });
+      const baseURL = configService.get("OPENAI_BASE_URL");
+      chatModel = new ChatOpenAI({
+        modelName: model,
+        ...(baseURL && { configuration: { baseURL } }),
+      });
+    } else if (provider === "openai-compat") {
+      const baseURL = configService.get("OPENAI_COMPAT_BASE_URL");
+      const apiKey = configService.get("OPENAI_COMPAT_API_KEY");
+
+      if (!baseURL) {
+        throw new Error(
+          "OPENAI_COMPAT_BASE_URL is required for openai-compat provider"
+        );
+      }
+
+      chatModel = new ChatOpenAI({
+        modelName: model,
+        configuration: {
+          baseURL,
+          ...(apiKey && { apiKey }),
+        },
+      });
     }
 
     if (!chatModel) throw new Error();
 
     return chatModel;
-  } catch {
+  } catch (error) {
+    if (error instanceof Error && error.message) {
+      throw error;
+    }
     throw new Error(`${modelType} is not found. Please check your model name`);
   }
 },
```
