
mito-ai: gpt-5.3-codex support #2178

Open
aarondr77 wants to merge 1 commit into dev from gpt-5.3

Conversation


@aarondr77 (Member) commented Feb 6, 2026

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context, and the specification if there is one.

Testing

Please provide a list of the ways you can "access" or use the functionality. Please try to be exhaustive here, and make sure that you test everything you list.

  • I have tested this on real data that is reasonable and large
  • If I changed the interaction with JupyterLab, I tested that it does not break other programs (like VS Code), and tested that it works "multiple times" in the same notebook.

Documentation

Note if any new documentation needs to be addressed or reviewed.


Note

Low Risk
Mostly additive model-constant and UI selection wiring changes with test coverage; limited behavioral impact beyond model selection and a small OpenAI parameter tweak.

Overview
Adds GPT 5.3 Codex as a selectable model across backend and frontend model lists, including OpenAI ordering and default model fallbacks.

Updates OpenAI request parameter building to apply `reasoning_effort="low"` for `gpt-5.3-codex`, and refines the UI model selector to better handle router-prefixed model IDs (LiteLLM/Abacus) by mapping display names to the actual model ID returned by the router. Tests are extended and updated to cover the new model and the adjusted OpenAI ordering assumptions.
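The router-prefix handling described above can be sketched as follows. This is a minimal illustration, not the actual Mito implementation: the prefix list, dictionary, and function names are assumptions.

```python
# Hypothetical sketch: map router-prefixed model ids (e.g. "Abacus/gpt-5.3-codex")
# back to a base id so display names resolve to the model id the router returns.

DISPLAY_NAMES = {
    "gpt-5.2": "GPT 5.2",
    "gpt-5.3-codex": "GPT 5.3 Codex",
}

# Router prefixes assumed for illustration (LiteLLM / Abacus style).
ROUTER_PREFIXES = ("Abacus/", "LiteLLM/")


def strip_router_prefix(model_id: str) -> str:
    """Remove a known router prefix so 'Abacus/gpt-5.3-codex' matches 'gpt-5.3-codex'."""
    for prefix in ROUTER_PREFIXES:
        if model_id.startswith(prefix):
            return model_id[len(prefix):]
    return model_id


def display_name_for(model_id: str) -> str:
    """Look up a display name by the base model id; fall back to the raw id."""
    base = strip_router_prefix(model_id)
    return DISPLAY_NAMES.get(base, model_id)
```

With this mapping, selecting "GPT 5.3 Codex" in the UI can resolve to whichever router-prefixed id the backend actually returns.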

Written by Cursor Bugbot for commit b5aa525. This will update automatically on new commits.

vercel bot commented Feb 6, 2026

The latest updates on your projects.

| Project  | Deployment | Actions          | Updated (UTC)        |
| -------- | ---------- | ---------------- | -------------------- |
| monorepo | Ready      | Preview, Comment | Feb 6, 2026, 5:53 pm |


cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

```diff
 }

-if model == "gpt-5.2":
+if model == "gpt-5.2" or model == "gpt-5.3-codex":
```

Router-prefixed models miss the `reasoning_effort` parameter

Medium Severity

The condition `model == "gpt-5.2" or model == "gpt-5.3-codex"` uses exact string matching, which fails for router-prefixed models like `"Abacus/gpt-5.3-codex"`. In `openai_client.py`, the router prefix is stripped in `_adjust_model_for_provider` AFTER `get_open_ai_completion_function_params` is called, so Abacus-configured enterprise users using `gpt-5.3-codex` won't get the `reasoning_effort` parameter set, causing inconsistent behavior compared to standard deployments.
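One way to address this would be to normalize the model id before the exact-match check, so the prefixed variant also receives the parameter. This is a hedged sketch only; `base_model` and `build_openai_params` are hypothetical names, not the functions in `openai_client.py`.

```python
# Sketch of a defensive fix (assumed helper names): strip any router prefix
# before the exact-match check so "Abacus/gpt-5.3-codex" is also matched.

def base_model(model: str) -> str:
    """'Abacus/gpt-5.3-codex' -> 'gpt-5.3-codex'; plain ids pass through unchanged."""
    return model.split("/", 1)[-1]


def build_openai_params(model: str) -> dict:
    """Build OpenAI request params, applying reasoning_effort by base model id."""
    params: dict = {"model": model}
    if base_model(model) in ("gpt-5.2", "gpt-5.3-codex"):
        params["reasoning_effort"] = "low"
    return params
```

Alternatively, the existing `_adjust_model_for_provider` prefix stripping could simply run before `get_open_ai_completion_function_params`, leaving the comparison untouched.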

