fix: handle zero timeout correctly for OpenAI-compatible providers #7367
Conversation
- Pass `undefined` instead of `0` to the OpenAI SDK when `apiRequestTimeout` is set to 0
- The OpenAI SDK interprets 0 as an immediate timeout rather than no timeout
- Fixes timeout issues with LM Studio and other OpenAI-compatible providers
- Updates tests to verify the correct behavior

Fixes #7366
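The fix can be sketched as a small standalone snippet (`getApiRequestTimeout` is the repo's helper, stubbed here for illustration):

```typescript
// Stub standing in for the repo's helper: returns the configured timeout in ms,
// where 0 means "the user disabled the timeout".
function getApiRequestTimeout(): number {
	return 0
}

const timeout = getApiRequestTimeout()
// The OpenAI SDK treats timeout: 0 as "fail immediately", not "no timeout",
// so map 0 to undefined before handing it to the client options:
const clientTimeout = timeout === 0 ? undefined : timeout

// e.g. new OpenAI({ baseURL, apiKey: "noop", timeout: clientTimeout })
console.log(clientTimeout) // prints "undefined" when the configured timeout is 0
```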
Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.
```diff
  apiKey: "noop",
- timeout: getApiRequestTimeout(),
+ // OpenAI SDK expects undefined for no timeout, not 0
+ timeout: timeout === 0 ? undefined : timeout,
```
Is this pattern intentional? I notice that here in LM Studio and Ollama, we're doing the conversion inline, but in the OpenAI provider, we're using an intermediate variable clientTimeout. Would it be worth standardizing on one approach across all three providers for consistency?
```ts
const timeout = getApiRequestTimeout()
// OpenAI SDK expects undefined for no timeout, not 0
const clientTimeout = timeout === 0 ? undefined : timeout
```
Nice use of an intermediate variable here! Though I'm wondering if we should consider extracting this conversion logic (0 to undefined) into a shared utility function since it's repeated in all three providers? Something like:
```suggestion
const clientTimeout = normalizeOpenAITimeout(timeout)
```
Where normalizeOpenAITimeout could live in the utils folder and handle this conversion consistently.
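A sketch of that proposed utility (hypothetical — `normalizeOpenAITimeout` does not exist in the repo yet; the name and location are this suggestion's assumption):

```typescript
// Hypothetical shared utility (location assumed, e.g. a timeout helper in utils):
// the OpenAI SDK interprets timeout: 0 as an immediate timeout, so a configured
// value of 0 (meaning "no timeout") must be passed to the SDK as undefined.
export function normalizeOpenAITimeout(timeout: number): number | undefined {
	return timeout === 0 ? undefined : timeout
}
```

Each provider could then call `normalizeOpenAITimeout(getApiRequestTimeout())` instead of repeating the ternary in three places.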
```diff
  apiKey: "ollama",
- timeout: getApiRequestTimeout(),
+ // OpenAI SDK expects undefined for no timeout, not 0
+ timeout: timeout === 0 ? undefined : timeout,
```
Same pattern as LM Studio - inline conversion. Could we consider using the same approach as the OpenAI provider with an intermediate variable for better readability across all providers?
Summary

This PR fixes the timeout issue with LM Studio and other OpenAI-compatible providers where setting `apiRequestTimeout` to 0 (intended to disable the timeout) caused an immediate timeout instead.

Problem

The OpenAI SDK interprets a timeout value of `0` as an immediate timeout rather than "no timeout". When users set `apiRequestTimeout` to 0 expecting to disable the timeout, requests would fail immediately.

Solution

- Pass `undefined` instead of `0` to the OpenAI client when the timeout is set to 0
- The OpenAI SDK treats `undefined` as "no timeout"

Changes
Testing
Fixes #7366
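The behavior the updated specs verify can be sketched framework-free (the repo's actual tests use its own harness; the names here are illustrative):

```typescript
// Same conversion the providers now apply, as a standalone function:
const toClientTimeout = (timeout: number): number | undefined =>
	timeout === 0 ? undefined : timeout

// Zero means "no timeout" and must reach the SDK as undefined;
// any positive value should pass through unchanged.
const checks: Array<[number, number | undefined]> = [
	[0, undefined],
	[30000, 30000],
	[600000, 600000],
]
for (const [input, expected] of checks) {
	if (toClientTimeout(input) !== expected) {
		throw new Error(`toClientTimeout(${input}) should be ${expected}`)
	}
}
console.log("all checks passed")
```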
Important
Fixes timeout handling by converting 0 to undefined for OpenAI-compatible providers, ensuring no timeout is set.
- Fixes timeout handling in `LmStudioHandler`, `OpenAiHandler`, and `OllamaHandler` by converting a 0 timeout to `undefined`.
- The OpenAI SDK treats `undefined` as no timeout.
- `lm-studio.ts`, `openai.ts`, `ollama.ts`: Convert 0 timeout to `undefined`.
- `lm-studio-timeout.spec.ts`, `ollama-timeout.spec.ts`, `openai-timeout.spec.ts`: Update tests to expect `undefined` for zero timeout.