
Conversation


@roomote roomote bot commented Aug 24, 2025

Summary

This PR fixes a timeout issue with LM Studio and other OpenAI-compatible providers: setting apiRequestTimeout to 0 (intended to disable the timeout) caused an immediate timeout instead.

Problem

The OpenAI SDK interprets a timeout value of 0 as an immediate timeout rather than "no timeout". When users set apiRequestTimeout to 0 expecting to disable the timeout, requests would fail immediately.

Solution

  • Modified the timeout handling in LM Studio, OpenAI, and Ollama providers to pass undefined instead of 0 to the OpenAI client when the timeout is set to 0
  • The OpenAI SDK correctly interprets undefined as "no timeout"
  • Updated all related tests to verify the correct behavior
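
The fix amounts to a one-line conversion at client construction time. A minimal sketch (the stub below stands in for the providers' real settings lookup, and variable names are simplified):

```typescript
// Hypothetical stand-in for the providers' configured timeout, in milliseconds.
// In the real providers this value comes from user settings via getApiRequestTimeout().
function getApiRequestTimeout(): number {
	return 0 // user set apiRequestTimeout to 0, meaning "no timeout"
}

const timeout = getApiRequestTimeout()

// The OpenAI SDK treats 0 as "time out immediately", so map 0 to undefined,
// which the SDK interprets as "no timeout".
const clientTimeout: number | undefined = timeout === 0 ? undefined : timeout

console.log(clientTimeout) // prints "undefined"
```

The resulting `clientTimeout` is what gets passed as the `timeout` option when constructing the OpenAI client.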

Changes

  • src/api/providers/lm-studio.ts: Convert 0 timeout to undefined
  • src/api/providers/openai.ts: Convert 0 timeout to undefined
  • src/api/providers/ollama.ts: Convert 0 timeout to undefined
  • src/api/providers/__tests__/*.spec.ts: Updated tests to expect undefined for zero timeout

Testing

  • ✅ All existing tests pass
  • ✅ Added/updated tests specifically for the zero timeout case
  • ✅ Linting and type checking pass

Fixes #7366


Important

Fixes timeout handling by converting 0 to undefined for OpenAI-compatible providers, ensuring no timeout is set.

  • Behavior:
    • Fixes timeout handling for LmStudioHandler, OpenAiHandler, and OllamaHandler by converting 0 timeout to undefined.
    • OpenAI SDK now correctly interprets undefined as no timeout.
  • Files:
    • lm-studio.ts, openai.ts, ollama.ts: Convert 0 timeout to undefined.
    • lm-studio-timeout.spec.ts, ollama-timeout.spec.ts, openai-timeout.spec.ts: Update tests to expect undefined for zero timeout.
  • Testing:
    • All existing tests pass.
    • Added/updated tests for zero timeout case.
    • Linting and type checking pass.

This description was created by Ellipsis for 3f98467.

- Pass undefined instead of 0 to OpenAI SDK when apiRequestTimeout is set to 0
- OpenAI SDK interprets 0 as immediate timeout rather than no timeout
- Fixes timeout issues with LM Studio and other OpenAI-compatible providers
- Updates tests to verify the correct behavior

Fixes #7366
@roomote roomote bot requested review from cte, jr and mrubens as code owners August 24, 2025 07:02
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. bug Something isn't working labels Aug 24, 2025

@roomote roomote bot left a comment


Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.

```diff
 apiKey: "noop",
-timeout: getApiRequestTimeout(),
+// OpenAI SDK expects undefined for no timeout, not 0
+timeout: timeout === 0 ? undefined : timeout,
```
Is this pattern intentional? I notice that here in LM Studio and Ollama, we're doing the conversion inline, but in the OpenAI provider, we're using an intermediate variable clientTimeout. Would it be worth standardizing on one approach across all three providers for consistency?


```ts
const timeout = getApiRequestTimeout()
// OpenAI SDK expects undefined for no timeout, not 0
const clientTimeout = timeout === 0 ? undefined : timeout
```

Nice use of an intermediate variable here! Though I'm wondering if we should consider extracting this conversion logic (0 to undefined) into a shared utility function since it's repeated in all three providers? Something like:

Suggested change:

```diff
-const clientTimeout = timeout === 0 ? undefined : timeout
+const clientTimeout = normalizeOpenAITimeout(timeout)
```

Where normalizeOpenAITimeout could live in the utils folder and handle this conversion consistently.
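
A hypothetical version of that helper (the name and the utils-folder home are the reviewer's proposal, not code that exists in this PR) might look like:

```typescript
// Hypothetical shared utility, as suggested in the review comment above.
// Maps a configured timeout of 0 to undefined so the OpenAI SDK disables
// the timeout instead of timing out immediately.
function normalizeOpenAITimeout(timeout: number): number | undefined {
	return timeout === 0 ? undefined : timeout
}

console.log(normalizeOpenAITimeout(0)) // prints "undefined"
console.log(normalizeOpenAITimeout(30_000)) // prints "30000"
```

Each provider would then call `normalizeOpenAITimeout(getApiRequestTimeout())` instead of repeating the ternary inline.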

```diff
 apiKey: "ollama",
-timeout: getApiRequestTimeout(),
+// OpenAI SDK expects undefined for no timeout, not 0
+timeout: timeout === 0 ? undefined : timeout,
```

Same pattern as LM Studio - inline conversion. Could we consider using the same approach as the OpenAI provider with an intermediate variable for better readability across all providers?

@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Aug 24, 2025
@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Aug 26, 2025
@hannesrudolph hannesrudolph added PR - Needs Preliminary Review and removed Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. labels Aug 26, 2025
@daniel-lxs daniel-lxs closed this Aug 27, 2025
@github-project-automation github-project-automation bot moved this from PR [Needs Prelim Review] to Done in Roo Code Roadmap Aug 27, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Aug 27, 2025


Development

Successfully merging this pull request may close these issues.

[BUG] Roo times out with LM Studio after ~5 minutes (cannot disable; long tasks cut off)
