fix: replace axios with fetch to respect VSCode proxy settings #6994
Conversation
- Replace axios.get with fetch in getOpenAiModels function
- Replace axios.get with fetch in getLmStudioModels function
- Update tests to mock fetch instead of axios
- This ensures model fetching respects VSCode proxy settings like PAC files

Fixes #6991
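For context, here is a minimal sketch of the shape of the change. The `fetch` call, the `response.ok` guard, the empty-array fallback, and the comments appear in the review diffs below; the function signature, header construction, and JSON parsing are assumptions made for illustration, not taken from the PR:

```ts
// Sketch only: signature and response shape are assumed, not quoted from the PR.
async function getOpenAiModels(baseUrl: string, apiKey?: string): Promise<string[]> {
	const trimmedBaseUrl = baseUrl.replace(/\/$/, "")
	const headers: Record<string, string> = apiKey ? { Authorization: `Bearer ${apiKey}` } : {}

	// Use fetch instead of axios to respect VSCode's proxy settings
	// This matches how the OpenAI SDK makes requests internally
	const response = await fetch(`${trimmedBaseUrl}/models`, {
		method: "GET",
		headers: headers,
	})

	if (!response.ok) {
		return []
	}

	// Assumed OpenAI-compatible response shape: { data: [{ id: string }, ...] }
	const body = (await response.json()) as { data?: Array<{ id: string }> }
	return body.data?.map((model) => model.id) ?? []
}
```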
Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.
```ts
})

if (!response.ok) {
	return []
```
Consider adding error logging here to help debug issues when model fetching fails. Currently we silently return an empty array for non-ok responses, which might make troubleshooting difficult:
```suggestion
if (!response.ok) {
	console.error(`Failed to fetch OpenAI models: ${response.status} ${response.statusText}`);
	return []
}
```
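If the logging is added, the non-ok path could also be covered by the updated fetch mocks. A rough sketch, assuming a Vitest-style suite and an exported `getOpenAiModels`; the import path is hypothetical and the real assertions in the PR's tests may differ:

```ts
import { it, expect, vi } from "vitest"
// Hypothetical import path, for illustration only.
import { getOpenAiModels } from "../openai"

it("returns an empty array and logs when the server responds with a non-ok status", async () => {
	// Silence and observe console.error so the test can assert on it.
	const errorSpy = vi.spyOn(console, "error").mockImplementation(() => {})

	// Stub global fetch so no real network request is made.
	vi.stubGlobal("fetch", vi.fn().mockResolvedValue({ ok: false, status: 502, statusText: "Bad Gateway" }))

	await expect(getOpenAiModels("http://localhost:1234/v1", "key")).resolves.toEqual([])
	expect(errorSpy).toHaveBeenCalled()

	vi.unstubAllGlobals()
	errorSpy.mockRestore()
})
```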
```ts
// This matches how the OpenAI SDK makes requests internally
const response = await fetch(`${trimmedBaseUrl}/models`, {
	method: "GET",
	headers: headers,
```
For consistency with the LM Studio implementation, should we include a Content-Type header here? While it's not strictly necessary for GET requests, maintaining consistency across our fetch implementations would be good:
```suggestion
const response = await fetch(`${trimmedBaseUrl}/models`, {
	method: "GET",
	headers: {
		"Content-Type": "application/json",
		...headers
	},
})
```
```diff
-	if (Object.keys(headers).length > 0) {
-		config["headers"] = headers
+	// Use fetch instead of axios to respect VSCode's proxy settings
+	// This matches how the OpenAI SDK makes requests internally
```
Would it be helpful to add a reference to the issue this fixes for future maintainability?
```suggestion
// Use fetch instead of axios to respect VSCode's proxy settings
// This matches how the OpenAI SDK makes requests internally
// Fixes #6991: Model fetching now respects VSCode proxy settings (PAC files, etc.)
```
Description
This PR fixes an issue where the OpenAI Compatible provider's model fetching doesn't respect VSCode's proxy settings (like PAC files), while the chat completions API calls do.
Problem
- The `getOpenAiModels` function uses `axios.get` to fetch available models
- Chat completions (via the OpenAI SDK) use the `fetch` API internally
- `axios` doesn't automatically respect VSCode's proxy settings, but `fetch` does

Solution
- Replaced `axios.get` with `fetch` in the `getOpenAiModels` function
- Also updated `getLmStudioModels` for consistency
- Updated tests to mock `fetch` instead of `axios`

Testing
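Roughly, the updated tests swap the axios mock for a fetch stub along these lines. This is a sketch assuming Vitest, a hypothetical import path, and an OpenAI-style `{ data: [...] }` payload; the real assertions in `openai.spec.ts` may differ:

```ts
import { it, expect, vi } from "vitest"
// Hypothetical import path, for illustration only.
import { getOpenAiModels } from "../openai"

it("parses model ids from an OpenAI-compatible /models response", async () => {
	vi.stubGlobal(
		"fetch",
		vi.fn().mockResolvedValue({
			ok: true,
			json: async () => ({ data: [{ id: "gpt-4o" }, { id: "gpt-4o-mini" }] }),
		}),
	)

	await expect(getOpenAiModels("https://api.example.com/v1")).resolves.toEqual(["gpt-4o", "gpt-4o-mini"])

	vi.unstubAllGlobals()
})
```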
Fixes #6991
Important
Replaced `axios` with `fetch` in `getOpenAiModels` and `getLmStudioModels` to respect VSCode's proxy settings, updating tests accordingly.

- Replaced `axios.get` with `fetch` in `getOpenAiModels` and `getLmStudioModels` to respect VSCode's proxy settings.
- Updated `openai.spec.ts` to mock `fetch` instead of `axios`.
- Removed the `axios` import from `openai.ts` and `lm-studio.ts`.

This description was created by a bot for 61d3f94. It will automatically update as commits are pushed.