Roo Code disconnects before LM Studio can finish its response #6521

@dabockster

Description

App Version

3.25.4

API Provider

LM Studio

Model Used

Deepseek R1 14b/32b, Qwen2.5 Coder Instruct 32b - all GGUF from unsloth and bartowski on HuggingFace

Roo Code Task Links (Optional)

No response

πŸ” Steps to Reproduce

  1. Connect Roo Code to a sufficiently large model on LM Studio where processing needs to be split between the GPU and CPU.
  2. Wait for LM Studio to process the input and relay the output.
  3. Roo Code times out after 1-2 minutes, before LM Studio can fully process the input - sometimes causing it to drop its context and start processing from the beginning.

πŸ’₯ Outcome Summary

Expected Roo Code to wait for LM Studio to compute its response instead of timing out, regardless of how long that takes. I care about accuracy and its ability to execute using what's available on my local system, not about execution speed at all.

When it has run, though, it has been excellent! I just can't have it timing out and retrying 4+ times before LM Studio has enough context built up. It feels like that timeout was designed for a cloud service, not a local program.
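As a rough illustration of the behavior being requested, here is a minimal sketch of a client that waits indefinitely for a local model. It assumes LM Studio's default OpenAI-compatible endpoint at `http://localhost:1234/v1/chat/completions`; this is not Roo Code's actual implementation, and the function names are hypothetical.

```python
import json
import urllib.request

# Assumed LM Studio default port for its OpenAI-compatible local server.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str, model: str) -> urllib.request.Request:
    """Build a streaming chat-completion request for the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Streaming keeps data flowing once generation starts, so an
        # idle-based timeout (if any) is less likely to fire mid-response.
        "stream": True,
    }).encode()
    return urllib.request.Request(
        LM_STUDIO_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )

def fetch(prompt: str, model: str):
    """Yield raw response lines, with no client-side deadline."""
    req = build_request(prompt, model)
    # timeout=None disables the client-side deadline entirely: a large
    # model split between GPU and CPU may legitimately take many minutes
    # just to produce its first token.
    with urllib.request.urlopen(req, timeout=None) as resp:
        for line in resp:
            yield line.decode()
```

The key choice is `timeout=None` (or an effectively unbounded, user-configurable timeout) for local providers, where long prompt-processing delays are expected rather than a sign of a dead connection.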

πŸ“„ Relevant Logs or Errors (Optional)

Metadata

Assignees

No one assigned

    Labels

    Issue - Needs Scoping: Valid, but needs effort estimate or design input before work can start.
    bug: Something isn't working.

    Type

    No type

    Projects

    Status

    Done

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests