fix: handle zero timeout correctly for OpenAI-compatible providers #7367
```diff
@@ -25,10 +25,12 @@ export class OllamaHandler extends BaseProvider implements SingleCompletionHandl
 		super()
 		this.options = options

+		const timeout = getApiRequestTimeout()
 		this.client = new OpenAI({
 			baseURL: (this.options.ollamaBaseUrl || "http://localhost:11434") + "/v1",
 			apiKey: "ollama",
-			timeout: getApiRequestTimeout(),
+			// OpenAI SDK expects undefined for no timeout, not 0
+			timeout: timeout === 0 ? undefined : timeout,
 		})
 	}
```

> **Contributor (author):** Same pattern as LM Studio: inline conversion. Could we consider using the same approach as the OpenAI provider, with an intermediate variable, for better readability across all providers?
```diff
@@ -48,6 +48,8 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
 		}

+		const timeout = getApiRequestTimeout()
+		// OpenAI SDK expects undefined for no timeout, not 0
+		const clientTimeout = timeout === 0 ? undefined : timeout
```

> **Contributor (author):** Nice use of an intermediate variable here! Though I'm wondering if we should consider extracting this conversion logic (0 to undefined) into a shared utility function, since it's repeated in all three providers.
```diff
 		if (isAzureAiInference) {
 			// Azure AI Inference Service (e.g., for DeepSeek) uses a different path structure
@@ -56,7 +58,7 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
 				apiKey,
 				defaultHeaders: headers,
 				defaultQuery: { "api-version": this.options.azureApiVersion || "2024-05-01-preview" },
-				timeout,
+				timeout: clientTimeout,
 			})
 		} else if (isAzureOpenAi) {
 			// Azure API shape slightly differs from the core API shape:
@@ -66,14 +68,14 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
 				apiKey,
 				apiVersion: this.options.azureApiVersion || azureOpenAiDefaultApiVersion,
 				defaultHeaders: headers,
-				timeout,
+				timeout: clientTimeout,
 			})
 		} else {
 			this.client = new OpenAI({
 				baseURL,
 				apiKey,
 				defaultHeaders: headers,
-				timeout,
+				timeout: clientTimeout,
 			})
 		}
 	}
```
> **Reviewer:** Is this pattern intentional? In LM Studio and Ollama the conversion is done inline, but the OpenAI provider uses an intermediate variable, `clientTimeout`. Would it be worth standardizing on one approach across all three providers for consistency?