Local model base_url not working #7152

@JimmyBenHur

Description

What version of Codex is running?

codex-cli 0.63.0

What subscription do you have?

ChatGPT Plus

Which model were you using?

qwen3-coder-30b

What platform is your computer?

Microsoft Windows NT 10.0.26100.0 x64

What issue are you seeing?

When using a local model served by LM Studio on a different PC in the same network, the base_url, which contains the IP address of the other computer, is ignored. This is my config.toml:

model_provider = "lmstudio"
model = "qwen3-coder-30b"

[model_providers.lmstudio]
name = "lmstudio"
base_url = "http://xxx.xxx.xx.xx:1234/v1"

[profiles.qwen3-coder-30b]
model_provider = "lmstudio"
model = "qwen/qwen3-coder-30b"

After the connection attempt fails, the CLI shows this error: Connection failed: error sending request for url (http://localhost:1234/v1/responses). The request went to localhost rather than to the IP address from base_url, which means the configured base_url was ignored.
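
A minimal workaround sketch, under two assumptions: first, that Codex resolves a built-in lmstudio provider definition (defaulting to http://localhost:1234/v1) ahead of a user-defined provider with the same ID, so registering the provider under a different, unused ID avoids the collision; second, that the failing /responses path means the provider should be pinned to the chat completions wire API via the wire_api option, if the installed Codex version supports it and the LM Studio build does not implement the Responses API. The provider ID lmstudio-remote below is a made-up name for illustration:

model_provider = "lmstudio-remote"
model = "qwen3-coder-30b"

# "lmstudio-remote" is a hypothetical ID chosen so it cannot collide
# with any built-in provider definition named "lmstudio".
[model_providers.lmstudio-remote]
name = "LM Studio (remote)"
# Placeholder IP kept as in the report; substitute the real address
# of the machine running LM Studio.
base_url = "http://xxx.xxx.xx.xx:1234/v1"
# Assumption: "chat" routes requests to /v1/chat/completions instead
# of /v1/responses.
wire_api = "chat"

If the renamed provider is still ignored, selecting it explicitly through a profile (for example via the CLI's --profile flag, assuming it is available in this version) would help narrow down whether provider lookup or profile resolution is at fault.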

What steps can reproduce the bug?

Host a local model on a different machine in your local network and use the config shown above.

What is the expected behavior?

The CLI is supposed to use the configured base_url; without it, it cannot connect, because the default localhost URL is wrong for a model hosted on another machine.

Additional information

No response


Labels

CLI: Issues related to the Codex CLI
bug: Something isn't working
custom-model: Issues related to custom model providers (including local models)
