
Conversation


@roomote roomote bot commented Aug 12, 2025

Description

This PR fixes an issue where the OpenAI Compatible provider's model fetching doesn't respect VSCode's proxy settings (like PAC files), while the chat completions API calls do.

Problem

  • The getOpenAiModels function uses axios.get to fetch available models
  • The OpenAI SDK (used for chat completions) uses the native fetch API internally
  • axios doesn't automatically respect VSCode's proxy settings, but fetch does
  • This causes model discovery to fail when behind a proxy, even though chat completions work
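
For illustration, a minimal sketch of the axios-based shape described above; the actual pre-change code in the repository, including names like config and the response parsing, may differ:

	// Illustrative sketch only, not the repository's exact pre-change code.
	// axios builds its own HTTP stack, so VSCode's proxy resolution (PAC files,
	// http.proxy settings) is not applied unless an agent is wired in manually.
	import axios from "axios"

	async function getOpenAiModels(baseUrl: string, apiKey?: string): Promise<string[]> {
		const config: { headers?: Record<string, string> } = {}
		const headers: Record<string, string> = {}
		if (apiKey) {
			headers["Authorization"] = `Bearer ${apiKey}`
		}
		if (Object.keys(headers).length > 0) {
			config["headers"] = headers
		}
		const response = await axios.get(`${baseUrl}/models`, config)
		return response.data?.data?.map((model: { id: string }) => model.id) ?? []
	}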

Solution

  • Replaced axios.get with fetch in the getOpenAiModels function
  • Updated getLmStudioModels the same way for consistency
  • Updated tests to mock fetch instead of axios
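
A minimal sketch of the fetch-based replacement, assuming the same /models endpoint, an optional API key, and a string[] return type; the actual getOpenAiModels in openai.ts may differ in details such as URL trimming and response parsing:

	// Illustrative sketch, not the exact code in openai.ts.
	// Per this PR, fetch (unlike axios) follows the same networking path as the
	// OpenAI SDK, so VSCode's proxy settings (including PAC files) are honored.
	async function getOpenAiModels(baseUrl: string, apiKey?: string): Promise<string[]> {
		const trimmedBaseUrl = baseUrl.replace(/\/+$/, "")

		const headers: Record<string, string> = {}
		if (apiKey) {
			headers["Authorization"] = `Bearer ${apiKey}`
		}

		// Use fetch instead of axios to respect VSCode's proxy settings
		// This matches how the OpenAI SDK makes requests internally
		const response = await fetch(`${trimmedBaseUrl}/models`, {
			method: "GET",
			headers: headers,
		})

		if (!response.ok) {
			return []
		}

		const json = (await response.json()) as { data?: Array<{ id: string }> }
		return json.data?.map((model) => model.id) ?? []
	}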

Testing

  • All existing tests pass
  • The change ensures both model fetching and chat completions use the same HTTP mechanism
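
For the test side, a hypothetical sketch of what mocking fetch could look like, assuming a vitest-style setup and the illustrative getOpenAiModels signature from the sketch above; the real openai.spec.ts may use a different runner and different assertions:

	// Hypothetical test sketch; the import path and function shape are assumptions.
	import { afterEach, describe, expect, it, vi } from "vitest"
	import { getOpenAiModels } from "../openai" // assumed path

	describe("getOpenAiModels", () => {
		afterEach(() => {
			vi.unstubAllGlobals()
		})

		it("returns model ids from the /models endpoint", async () => {
			// Stub the global fetch that replaced the old axios.get call.
			vi.stubGlobal(
				"fetch",
				vi.fn().mockResolvedValue({
					ok: true,
					json: async () => ({ data: [{ id: "gpt-4o" }, { id: "o3-mini" }] }),
				}),
			)

			const models = await getOpenAiModels("https://example.com/v1", "test-key")

			expect(fetch).toHaveBeenCalledWith(
				"https://example.com/v1/models",
				expect.objectContaining({ method: "GET" }),
			)
			expect(models).toEqual(["gpt-4o", "o3-mini"])
		})

		it("returns an empty list when the response is not ok", async () => {
			vi.stubGlobal("fetch", vi.fn().mockResolvedValue({ ok: false, status: 500 }))

			await expect(getOpenAiModels("https://example.com/v1")).resolves.toEqual([])
		})
	})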

Fixes #6991


Important

Replaced axios with fetch in getOpenAiModels and getLmStudioModels to respect VSCode's proxy settings, updating tests accordingly.

  • Behavior:
    • Replace axios.get with fetch in getOpenAiModels and getLmStudioModels to respect VSCode's proxy settings.
    • Ensures model fetching and chat completions use the same HTTP mechanism.
  • Testing:
    • Update tests in openai.spec.ts to mock fetch instead of axios.
    • Tests verify correct handling of URLs, headers, and error scenarios.
  • Misc:
    • Remove axios import from openai.ts and lm-studio.ts.

This description was created by Ellipsis for 61d3f94.

@roomote roomote bot requested review from cte, jr and mrubens as code owners August 12, 2025 14:07
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Aug 12, 2025
@dosubot dosubot bot added the bug Something isn't working label Aug 12, 2025

@roomote roomote bot left a comment


Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.

	})

	if (!response.ok) {
		return []

Consider adding error logging here to help debug issues when model fetching fails. Currently we silently return an empty array for non-ok responses, which might make troubleshooting difficult:

Suggested change (in place of return []):

	if (!response.ok) {
		console.error(`Failed to fetch OpenAI models: ${response.status} ${response.statusText}`);
		return []
	}

	// This matches how the OpenAI SDK makes requests internally
	const response = await fetch(`${trimmedBaseUrl}/models`, {
		method: "GET",
		headers: headers,

For consistency with the LM Studio implementation, should we include a Content-Type header here? While it's not strictly necessary for GET requests, maintaining consistency across our fetch implementations would be good:

Suggested change (in place of headers: headers,):

	const response = await fetch(`${trimmedBaseUrl}/models`, {
		method: "GET",
		headers: {
			"Content-Type": "application/json",
			...headers
		},
	})

	if (Object.keys(headers).length > 0) {
		config["headers"] = headers
	// Use fetch instead of axios to respect VSCode's proxy settings
	// This matches how the OpenAI SDK makes requests internally

Would it be helpful to add a reference to the issue this fixes for future maintainability?

Suggested change (in place of // This matches how the OpenAI SDK makes requests internally):

	// Use fetch instead of axios to respect VSCode's proxy settings
	// This matches how the OpenAI SDK makes requests internally
	// Fixes #6991: Model fetching now respects VSCode proxy settings (PAC files, etc.)

@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Aug 12, 2025
@daniel-lxs daniel-lxs closed this Aug 13, 2025
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Aug 13, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Aug 13, 2025

Labels

  • bug (Something isn't working)
  • Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.)
  • size:L (This PR changes 100-499 lines, ignoring generated files.)

Projects

Archived in project

Development

Successfully merging this pull request may close these issues.

Generic "connection error" -- Debug steps?

4 participants