fix: append /v1 to OpenAI Compatible base URLs for llama.cpp compatibility #7066
```diff
@@ -57,7 +57,23 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 			this.options.enableGpt5ReasoningSummary = true
 		}
 		const apiKey = this.options.openAiNativeApiKey ?? "not-provided"
-		this.client = new OpenAI({ baseURL: this.options.openAiNativeBaseUrl, apiKey })
+
+		// Handle base URL - append /v1 if it's a plain URL without a path
```
Review comment: Consider adding a comment explaining why `/v1` is appended here (the llama.cpp compatibility this PR targets), so future readers understand the intent.
```diff
+		let baseURL = this.options.openAiNativeBaseUrl
+		if (baseURL && !baseURL.includes("/v1")) {
+			// Check if it's a URL without any path (just protocol://host:port)
+			try {
+				const url = new URL(baseURL)
+				// If the pathname is just '/', append /v1
+				if (url.pathname === "/") {
```
Review comment: The current implementation only appends `/v1` when the parsed pathname is exactly `/`; base URLs that already carry some other path are left untouched.
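For reference, this is how the pathname-based rule plays out for a few representative URLs (the hosts below are illustrative, not taken from the PR):

```typescript
// Illustrative check of the pathname-based rule (hosts are made up):
console.log(new URL("http://localhost:8080").pathname)      // "/"     -> gets "/v1" appended
console.log(new URL("http://localhost:8080/").pathname)     // "/"     -> gets "/v1" appended
console.log(new URL("http://localhost:8080/api").pathname)  // "/api"  -> left unchanged by this branch
console.log(new URL("https://example.com/v1").pathname)     // "/v1"   -> already skipped by the includes("/v1") guard
```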
```diff
+					baseURL = baseURL.replace(/\/$/, "") + "/v1"
+				}
+			} catch {
```
Review comment: When URL parsing fails, the code silently continues with the original URL. Would it be helpful to log a warning so users know their URL might not be processed as expected?
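One way that warning could look, sketched here as a standalone helper rather than the actual constructor code (the `normalizeBaseUrl` name and message wording are illustrative, not part of the PR):

```typescript
// Hypothetical sketch of the reviewer's suggestion -- not part of this PR.
// Same logic as the diff above, but the parse failure is surfaced as a warning.
function normalizeBaseUrl(rawBaseUrl: string): string {
	let baseURL = rawBaseUrl
	if (baseURL && !baseURL.includes("/v1")) {
		try {
			const url = new URL(baseURL)
			if (url.pathname === "/") {
				baseURL = baseURL.replace(/\/$/, "") + "/v1"
			}
		} catch (error) {
			// If URL parsing fails, warn instead of failing silently, then keep the original URL
			console.warn(`Could not parse base URL "${baseURL}"; using it unchanged`, error)
		}
	}
	return baseURL
}
```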
```diff
+				// If URL parsing fails, leave it as is
+			}
+		}
+
+		this.client = new OpenAI({ baseURL, apiKey })
 	}
 
 	private normalizeGpt5Usage(usage: any, model: OpenAiNativeModel): ApiStreamUsageChunk | undefined {
```
Review comment: The tests verify that the handler instantiates correctly, but they don't actually verify that `/v1` is appended to the client's baseURL. Since the OpenAI client is mocked, we can't directly inspect its configuration. Could we consider adding a way to verify the actual URL being used, perhaps by checking the mock calls or adding integration tests?
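One possible way to check the mock calls, sketched under the assumption that the suite uses Vitest and mocks the `openai` package's default export (the import path and option values are illustrative, not taken from the PR):

```typescript
// Hypothetical test sketch -- assumes Vitest and a mocked `openai` constructor.
import { describe, it, expect, vi } from "vitest"
import OpenAI from "openai"
import { OpenAiNativeHandler } from "../openai-native" // illustrative path

vi.mock("openai", () => ({
	default: vi.fn().mockImplementation(() => ({})),
}))

describe("OpenAiNativeHandler base URL handling", () => {
	it("appends /v1 to a bare host URL", () => {
		new OpenAiNativeHandler({
			openAiNativeApiKey: "test-key",
			openAiNativeBaseUrl: "http://localhost:8080",
		} as any)

		// Inspect the arguments the mocked OpenAI constructor was called with.
		const ctorArgs = vi.mocked(OpenAI).mock.calls[0][0]
		expect(ctorArgs?.baseURL).toBe("http://localhost:8080/v1")
	})
})
```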