chore(models): register gpt-5.4 + gpt-5.4-pro in OpenAI catalogue #432

davort wants to merge 1 commit into BeehiveInnovations:main
Conversation
Live on OpenAI API since 2026-03-05 but not yet in PAL's upstream catalogue. Score 19 + 20 (pro variant). Both use /v1/responses with reasoning effort high. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Code Review
This pull request adds configuration for the gpt-5.4 and gpt-5.4-pro models to the OpenAI model catalogue. Feedback:

- a reminder to update the preference logic in providers/openai.py to use the new models,
- a warning about a potential configuration mismatch regarding streaming support for gpt-5.4 when using the responses API, and
- a suggestion to add a short-form alias for the Pro model for consistency.
```json
    "temperature_constraint": "fixed"
},
{
    "model_name": "gpt-5.4",
```
The addition of these flagship models (gpt-5.4 and gpt-5.4-pro) to the catalogue is incomplete without updating the preference logic in providers/openai.py.
Since these models have the highest intelligence scores (19 and 20), they should be added to the top of the preference lists for ToolModelCategory.EXTENDED_REASONING and ToolModelCategory.BALANCED in providers/openai.py. Without this change, the system will continue to default to older models like gpt-5.1-codex or gpt-5.2 even when these superior models are available.
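The shape of the change suggested here can be sketched as follows. This is illustrative only: the category names come from the review comment, but the actual data structures and list contents in providers/openai.py may differ.

```python
# Hypothetical sketch of the preference-list update suggested above.
# ToolModelCategory is named in the review; the dict layout and the
# exact orderings below are assumptions, not PAL's real code.
from enum import Enum


class ToolModelCategory(Enum):
    EXTENDED_REASONING = "extended_reasoning"
    BALANCED = "balanced"


# Preference lists: earlier entries win when available.
MODEL_PREFERENCES = {
    ToolModelCategory.EXTENDED_REASONING: [
        "gpt-5.4-pro",   # score 20 -- new in this PR
        "gpt-5.4",       # score 19 -- new in this PR
        "gpt-5.2-pro",
        "gpt-5.1-codex",
    ],
    ToolModelCategory.BALANCED: [
        "gpt-5.4",
        "gpt-5.2",
        "gpt-5.1-codex-mini",
    ],
}


def pick_model(category, available):
    """Return the most-preferred available model for a tool category."""
    for name in MODEL_PREFERENCES[category]:
        if name in available:
            return name
    raise LookupError(f"no available model for {category}")
```

Without an update like this, the selection logic keeps falling through to the older entries even when the new models are registered in the catalogue.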
```json
    "max_output_tokens": 128000,
    "supports_extended_thinking": true,
    "supports_system_prompts": true,
    "supports_streaming": true,
```
There is a potential configuration mismatch for gpt-5.4. You have enabled supports_streaming while also setting use_openai_response_api: true (line 285).
In this configuration file, all other models that use the /v1/responses API (such as gpt-5.2-pro and gpt-5.1-codex) have streaming disabled. Conversely, models that support streaming (like gpt-5.1-codex-mini) do not use the /responses API. If the /v1/responses endpoint does not support server-sent events (SSE) for this model, enabling streaming here will cause runtime failures in the chat tool.
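The invariant described here can be checked mechanically. A minimal sketch, assuming catalogue entries are plain dicts with the field names visible in the diff (how PAL actually loads and validates the catalogue is not shown in this PR):

```python
# Sketch of a catalogue sanity check for the streaming/responses-API
# mismatch described above. Field names match the diff; the entry
# shapes and where such a check would run are assumptions.
def find_streaming_mismatches(models):
    """Return names of models that enable streaming while being
    routed through the /v1/responses endpoint."""
    return [
        m["model_name"]
        for m in models
        if m.get("supports_streaming") and m.get("use_openai_response_api")
    ]


# Toy catalogue mirroring the patterns the reviewer describes:
catalogue = [
    {"model_name": "gpt-5.2-pro",
     "use_openai_response_api": True, "supports_streaming": False},
    {"model_name": "gpt-5.1-codex-mini",
     "use_openai_response_api": False, "supports_streaming": True},
    {"model_name": "gpt-5.4",
     "use_openai_response_api": True, "supports_streaming": True},
]
```

On this toy catalogue, only gpt-5.4 is flagged, matching the reviewer's observation.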
```json
    "aliases": [
        "gpt5.4-pro",
        "gpt5.4pro",
        "gpt-5.4-pro"
    ],
```
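Pieced together from the fragments visible above and the PR description, the new gpt-5.4 entry presumably looks something like this. Field order and any fields not quoted in the diff are assumptions; note that supports_streaming: true is the value the reviewer flags as a likely mismatch with the responses API.

```json
{
    "model_name": "gpt-5.4",
    "aliases": ["gpt5.4", "5.4"],
    "max_output_tokens": 128000,
    "supports_extended_thinking": true,
    "supports_system_prompts": true,
    "supports_streaming": true,
    "use_openai_response_api": true,
    "temperature_constraint": "fixed"
}
```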
Summary

Adds entries for gpt-5.4 and gpt-5.4-pro to conf/openai_models.json. These models have been live on OpenAI's /v1/responses endpoint since 2026-03-05 but aren't yet in PAL's catalogue, so users currently have to bypass PAL or hand-edit config to use them.

Both use use_openai_response_api: true and temperature_constraint: "fixed", matching the existing GPT-5.x entries in the file. Aliases follow the same pattern as neighbouring entries (gpt5.4, 5.4, etc.).

Verification
Tested locally against the live API:
Supported reasoning.effort values for both models: none | low | medium | high | xhigh (note: minimal is rejected, unlike some earlier GPT-5 variants).

Test plan

- mcp__pal__chat after PAL restart
- /v1/responses works as fallback
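The reasoning-effort behaviour observed in verification can be encoded as a small guard. The accepted/rejected values come from the local testing noted above; the function itself is illustrative, not part of the PR.

```python
# Illustrative guard encoding the observed reasoning.effort support
# for gpt-5.4 / gpt-5.4-pro (per the local testing described above).
SUPPORTED_EFFORTS = {"none", "low", "medium", "high", "xhigh"}


def check_effort(effort):
    """Raise ValueError for effort values these models reject."""
    if effort == "minimal":
        raise ValueError(
            "'minimal' is rejected by gpt-5.4 models, "
            "unlike some earlier GPT-5 variants"
        )
    if effort not in SUPPORTED_EFFORTS:
        raise ValueError(f"unsupported reasoning.effort: {effort!r}")
    return effort
```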