fix: use axios adapter for OpenAI SDK to support SOCKS5 proxies #7006

Closed · wants to merge 1 commit
1 change: 1 addition & 0 deletions src/api/providers/__tests__/openai.spec.ts
@@ -116,6 +116,7 @@ describe("OpenAiHandler", () => {
"User-Agent": `RooCode/${Package.version}`,
},
timeout: expect.any(Number),
fetch: expect.any(Function),
})
})
})
8 changes: 8 additions & 0 deletions src/api/providers/openai.ts
@@ -24,6 +24,7 @@ import { DEFAULT_HEADERS } from "./constants"
import { BaseProvider } from "./base-provider"
import type { SingleCompletionHandler, ApiHandlerCreateMessageMetadata } from "../index"
import { getApiRequestTimeout } from "./utils/timeout-config"
import { createAxiosFetchAdapter, shouldUseAxiosForProxy } from "./utils/axios-fetch-adapter"

// TODO: Rename this to OpenAICompatibleHandler. Also, I think the
// `OpenAINativeHandler` can subclass from this, since it's obviously
@@ -49,6 +50,10 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandler {

const timeout = getApiRequestTimeout()

// Determine if we should use axios for proxy support
const useAxiosForProxy = this.options.openAiUseAxiosForProxy ?? shouldUseAxiosForProxy()
const customFetch = createAxiosFetchAdapter(useAxiosForProxy)

if (isAzureAiInference) {
// Azure AI Inference Service (e.g., for DeepSeek) uses a different path structure
this.client = new OpenAI({
@@ -57,6 +62,7 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandler {
defaultHeaders: headers,
defaultQuery: { "api-version": this.options.azureApiVersion || "2024-05-01-preview" },
timeout,
fetch: customFetch,
})
} else if (isAzureOpenAi) {
// Azure API shape slightly differs from the core API shape:
@@ -67,13 +73,15 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandler {
apiVersion: this.options.azureApiVersion || azureOpenAiDefaultApiVersion,
defaultHeaders: headers,
timeout,
fetch: customFetch,
})
} else {
this.client = new OpenAI({
baseURL,
apiKey,
defaultHeaders: headers,
timeout,
fetch: customFetch,
})
}
}
130 changes: 130 additions & 0 deletions src/api/providers/utils/axios-fetch-adapter.ts
@@ -0,0 +1,130 @@
import axios, { AxiosRequestConfig, AxiosResponse } from "axios"
Review comment (Author):
This new file lacks dedicated unit tests. Would it be worth adding tests for stream conversion, error handling, and proxy detection to ensure the adapter works correctly in all scenarios?
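As a starting point, here is a minimal, self-contained sketch of a stream-conversion test. It uses plain Node assertions rather than the repo's test runner, and reproduces the adapter's `nodeStreamToWebStream` logic inline (in the repo you would import it from `axios-fetch-adapter` instead):

```typescript
import { Readable } from "stream"

// Inline copy of the adapter's stream-conversion logic, so this sketch runs
// on its own (Node 18+, where ReadableStream/TextEncoder are globals).
function nodeStreamToWebStream(nodeStream: Readable): ReadableStream<Uint8Array> {
	return new ReadableStream({
		start(controller) {
			nodeStream.on("data", (chunk) => {
				controller.enqueue(typeof chunk === "string" ? new TextEncoder().encode(chunk) : new Uint8Array(chunk))
			})
			nodeStream.on("end", () => controller.close())
			nodeStream.on("error", (err) => controller.error(err))
		},
		cancel() {
			nodeStream.destroy()
		},
	})
}

// Drain a Web ReadableStream and decode the collected bytes.
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
	const reader = stream.getReader()
	const decoder = new TextDecoder()
	let out = ""
	for (;;) {
		const { done, value } = await reader.read()
		if (done) break
		out += decoder.decode(value, { stream: true })
	}
	return out
}

// Round-trip a few chunks through the conversion.
async function testStreamConversion(): Promise<string> {
	return collect(nodeStreamToWebStream(Readable.from(["hello ", "world"])))
}
```

Error propagation and cancellation would deserve their own cases along the same lines.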

import { Readable } from "stream"

/**
* Creates a fetch-compatible wrapper around axios for use with OpenAI SDK.
* This adapter allows axios to be used instead of the native fetch API,
* which is important for proxy support in VSCode extensions where the
* patched fetch may not work correctly with certain proxy configurations
* (particularly SOCKS5 proxies).
*
* @param useAxiosForProxy - If true, uses axios instead of native fetch
* @returns A fetch-compatible function
*/
export function createAxiosFetchAdapter(useAxiosForProxy: boolean = false): typeof fetch {
// If not using axios for proxy, return native fetch
if (!useAxiosForProxy) {
return fetch
}

// Return an axios-based fetch implementation
return async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
const url = typeof input === "string" ? input : input instanceof URL ? input.toString() : (input as Request).url
Review comment (Author):
When input is a Request object, we're not handling the case where the Request might have a body that needs to be properly extracted. Could we add handling for this case?

Suggested change:
const url = typeof input === "string" ? input : input instanceof URL ? input.toString() : (input as Request).url
const requestInit =
	input instanceof Request
		? {
				method: input.method,
				headers: Object.fromEntries(input.headers.entries()),
				body: await input.text(), // or input.blob() depending on content type
			}
		: init

// Convert RequestInit to AxiosRequestConfig
const config: AxiosRequestConfig = {
Review comment (Author):
The axios adapter doesn't respect the timeout setting that's passed to the OpenAI client constructor. Consider adding timeout configuration to the AxiosRequestConfig:

Suggested change:
const config: AxiosRequestConfig = {
	url,
	method: (init?.method || "GET") as any,
	headers: init?.headers as any,
	data: init?.body,
	timeout: /* get timeout from somewhere */,
	responseType: "stream",
	decompress: false,
	validateStatus: () => true,
}

url,
method: (init?.method || "GET") as any,
headers: init?.headers as any,
data: init?.body,
// Important: Set responseType to 'stream' for streaming responses
responseType: "stream",
// Disable automatic decompression to let the consumer handle it
decompress: false,
// Don't throw on HTTP error status codes
validateStatus: () => true,
}

try {
const axiosResponse: AxiosResponse<Readable> = await axios(config)

// Convert axios response to fetch Response
return createResponseFromAxios(axiosResponse)
} catch (error: any) {
// Handle network errors
throw new TypeError(`Failed to fetch: ${error.message}`)
}
}
}

/**
* Converts an Axios response to a fetch Response object
*/
function createResponseFromAxios(axiosResponse: AxiosResponse<Readable>): Response {
const { status, statusText, headers, data } = axiosResponse

// Convert Node.js Readable stream to Web ReadableStream
const readableStream = nodeStreamToWebStream(data)

// Create Response with proper headers
const responseHeaders = new Headers()
Object.entries(headers).forEach(([key, value]) => {
if (value !== undefined) {
responseHeaders.set(key, String(value))
}
})

return new Response(readableStream, {
status,
statusText,
headers: responseHeaders,
})
}

/**
* Converts a Node.js Readable stream to a Web ReadableStream
*/
function nodeStreamToWebStream(nodeStream: Readable): ReadableStream<Uint8Array> {
return new ReadableStream({
start(controller) {
Review comment (Author):
Is this intentional? The Node.js stream event listeners are never removed if the ReadableStream is abandoned before completion. This could cause memory leaks. Consider adding cleanup logic:

Suggested change:
start(controller) {
	const cleanup = () => {
		nodeStream.removeAllListeners("data")
		nodeStream.removeAllListeners("end")
		nodeStream.removeAllListeners("error")
	}
	nodeStream.on("data", (chunk) => {
		// ... existing code ...
	})
	nodeStream.on("end", () => {
		controller.close()
		cleanup()
	})
	nodeStream.on("error", (err) => {
		controller.error(err)
		cleanup()
	})
},

nodeStream.on("data", (chunk) => {
// Ensure chunk is a Uint8Array
if (typeof chunk === "string") {
controller.enqueue(new TextEncoder().encode(chunk))
} else if (chunk instanceof Buffer) {
controller.enqueue(new Uint8Array(chunk))
} else {
controller.enqueue(chunk)
}
})

nodeStream.on("end", () => {
controller.close()
})

nodeStream.on("error", (err) => {
controller.error(err)
})
},
cancel() {
nodeStream.destroy()
},
})
}

/**
* Checks if the current environment suggests that axios should be used
* instead of fetch for proxy support. This can be based on:
* - Presence of proxy environment variables
* - VSCode proxy settings
* - User configuration
*/
export function shouldUseAxiosForProxy(): boolean {
Review comment (Author):
The proxy detection only checks environment variables but doesn't check VSCode's actual proxy settings. Could we enhance this to also check vscode.workspace.getConfiguration('http').get('proxy')? This would make the detection more comprehensive.
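A hypothetical sketch of that enhancement, with the VSCode config reader injected so the logic stays testable outside an extension host (the names `detectProxy` and `VsCodeProxyReader` are illustrative, not from the PR; in the extension you would pass `() => vscode.workspace.getConfiguration("http").get<string>("proxy")`):

```typescript
// A reader for VSCode's http.proxy setting; injected because the vscode
// module only exists inside an extension host.
type VsCodeProxyReader = () => string | undefined

function detectProxy(
	env: Record<string, string | undefined>,
	readVsCodeProxy: VsCodeProxyReader = () => undefined,
): boolean {
	// A non-empty http.proxy setting wins outright.
	if (readVsCodeProxy()) return true

	// Otherwise fall back to the environment-variable check from the PR.
	const proxyVars = ["HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY"]
	return proxyVars.some((name) => Boolean(env[name] || env[name.toLowerCase()]))
}
```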

// Check for common proxy environment variables
const proxyVars = [
"HTTP_PROXY",
"http_proxy",
"HTTPS_PROXY",
"https_proxy",
"ALL_PROXY",
"all_proxy",
"NO_PROXY",
"no_proxy",
]

const hasProxyEnvVars = proxyVars.some((varName) => process.env[varName])
Review comment (Author):
The shouldUseAxiosForProxy() function is called on every OpenAI handler instantiation. Could we consider caching this check for performance, especially since environment variables don't change during runtime?
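A minimal sketch of that caching idea: since environment variables are effectively static for the process lifetime, the detection result can be memoized once (`makeCachedCheck` is an illustrative helper, not part of the PR):

```typescript
// Memoize a boolean check: run it on first call, then return the cached
// result for the rest of the process lifetime.
function makeCachedCheck(check: () => boolean): () => boolean {
	let cached: boolean | undefined
	return () => {
		if (cached === undefined) {
			cached = check()
		}
		return cached
	}
}
```

The module could then wrap the detection once at load time, e.g. `const shouldUseAxiosForProxyCached = makeCachedCheck(shouldUseAxiosForProxy)`, at the cost of not picking up env-var changes made after startup.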


// For now, we'll enable axios for proxy support if proxy env vars are detected
// This can be extended to check VSCode settings or user preferences
return hasProxyEnvVars
}
8 changes: 8 additions & 0 deletions src/shared/api.ts
@@ -14,6 +14,14 @@ export type ApiHandlerOptions = Omit<ProviderSettings, "apiProvider"> & {
* Defaults to true; set to false to disable summaries.
*/
enableGpt5ReasoningSummary?: boolean

/**
* When true, uses axios instead of native fetch for OpenAI API calls.
* This can help with proxy configurations where VSCode's patched fetch
* doesn't work correctly (e.g., SOCKS5 proxies).
* Defaults to auto-detection based on proxy environment variables.
*/
openAiUseAxiosForProxy?: boolean
}

// RouterName