
chore(models): register gpt-5.4 + gpt-5.4-pro in OpenAI catalogue #432

Open
davort wants to merge 1 commit into BeehiveInnovations:main from davort:local/gpt-5.4-models

Conversation

@davort davort commented Apr 16, 2026

Summary

Adds entries for gpt-5.4 and gpt-5.4-pro to conf/openai_models.json. These models have been live on OpenAI's /v1/responses endpoint since 2026-03-05 but aren't yet in PAL's catalogue, so users currently have to bypass PAL or hand-edit config to use them.

  • gpt-5.4 — score 19, 400K context / 128K output, vision + reasoning, default effort high
  • gpt-5.4-pro — score 20, 400K context / 272K output, no streaming, default effort high

Both set use_openai_response_api: true and temperature_constraint: "fixed", matching the existing GPT-5.x entries in the file. Aliases follow the same pattern as neighbouring entries (gpt5.4, 5.4, etc.).
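For reference, the new entries look roughly like this. Field names follow the keys visible in the review diff below (model_name, max_output_tokens, supports_streaming, use_openai_response_api, temperature_constraint); the score and effort key names are illustrative, not verbatim from the file:

```json
{
  "model_name": "gpt-5.4",
  "aliases": ["gpt5.4", "5.4"],
  "intelligence_score": 19,
  "context_window": 400000,
  "max_output_tokens": 128000,
  "supports_extended_thinking": true,
  "supports_system_prompts": true,
  "supports_streaming": true,
  "use_openai_response_api": true,
  "temperature_constraint": "fixed",
  "default_reasoning_effort": "high"
}
```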

Verification

Tested locally against the live API:

curl -sS -X POST https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.4","input":[{"role":"user","content":"reply with the single word: pong"}],"reasoning":{"effort":"low"}}'
# → status: completed, model: gpt-5.4-2026-03-05, text: "pong"

Supported reasoning.effort values for both models: none|low|medium|high|xhigh (note: minimal is rejected, unlike some earlier GPT-5 variants).
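The accepted effort set can be guarded client-side before hitting the API. A minimal sketch (hypothetical helper, not part of PAL), encoding the values observed in the curl test above:

```python
# Reasoning effort values accepted by gpt-5.4 / gpt-5.4-pro on /v1/responses,
# per manual testing against the live API. Note "minimal" is NOT in the set.
ALLOWED_EFFORTS = {"none", "low", "medium", "high", "xhigh"}

def validate_effort(effort: str) -> str:
    """Return the effort unchanged if supported, otherwise raise ValueError."""
    if effort not in ALLOWED_EFFORTS:
        raise ValueError(
            f"unsupported reasoning.effort {effort!r}; "
            f"choose one of {sorted(ALLOWED_EFFORTS)}"
        )
    return effort

print(validate_effort("high"))  # → high
```

Calling `validate_effort("minimal")` raises ValueError, matching the API's rejection of that value for these models.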

Test plan

  • Models load via mcp__pal__chat after PAL restart
  • Direct curl to /v1/responses works as fallback
  • Maintainer review of catalogue conventions / aliases

Live on OpenAI API since 2026-03-05 but not yet in PAL's upstream catalogue. Score 19 + 20 (pro variant). Both use /v1/responses with reasoning effort high.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Repo admins can enable using credits for code reviews in their settings.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds configuration for the gpt-5.4 and gpt-5.4-pro models to the OpenAI model catalogue. Feedback includes a reminder to update the preference logic in providers/openai.py to utilize these new models, a warning about a potential configuration mismatch regarding streaming support for gpt-5.4 when using the responses API, and a suggestion to add a short-form alias for the Pro model for consistency.

Comment thread conf/openai_models.json
"temperature_constraint": "fixed"
},
{
"model_name": "gpt-5.4",


Severity: high

The addition of these flagship models (gpt-5.4 and gpt-5.4-pro) to the catalogue is incomplete without updating the preference logic in providers/openai.py.

Since these models have the highest intelligence scores (19 and 20), they should be added to the top of the preference lists for ToolModelCategory.EXTENDED_REASONING and ToolModelCategory.BALANCED in providers/openai.py. Without this change, the system will continue to default to older models like gpt-5.1-codex or gpt-5.2 even when these superior models are available.

Comment thread conf/openai_models.json
"max_output_tokens": 128000,
"supports_extended_thinking": true,
"supports_system_prompts": true,
"supports_streaming": true,


Severity: medium

There is a potential configuration mismatch for gpt-5.4. You have enabled supports_streaming while also setting use_openai_response_api: true (line 285).

In this configuration file, all other models that use the /v1/responses API (such as gpt-5.2-pro and gpt-5.1-codex) have streaming disabled. Conversely, models that support streaming (like gpt-5.1-codex-mini) do not use the /responses API. If the /v1/responses endpoint does not support server-sent events (SSE) for this model, enabling streaming here will cause runtime failures in the chat tool.
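A mismatch like this could be caught with a one-off lint over the catalogue. Sketch only (hypothetical check, not existing PAL tooling; the top-level "models" key is an assumption about the file's layout):

```python
import json

def find_streaming_mismatches(path: str) -> list[str]:
    """Flag entries that claim supports_streaming while also routed to
    /v1/responses via use_openai_response_api, a combination no other
    entry in the catalogue uses."""
    with open(path) as f:
        catalogue = json.load(f)
    return [
        m["model_name"]
        for m in catalogue.get("models", [])
        if m.get("supports_streaming") and m.get("use_openai_response_api")
    ]
```

Running this against conf/openai_models.json before and after the PR would make the inconsistency visible at a glance.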

Comment thread conf/openai_models.json
Comment on lines +293 to +297
"aliases": [
"gpt5.4-pro",
"gpt5.4pro",
"gpt-5.4-pro"
],


Severity: medium

To maintain consistency with the gpt-5.4 and gpt-5.2 entries, consider adding a short-form alias for the Pro model.

Suggested change (adds the 5.4-pro short form):

  "aliases": [
    "gpt5.4-pro",
    "gpt5.4pro",
    "gpt-5.4-pro",
    "5.4-pro"
  ],
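For context, alias resolution along these lines is what makes the short form useful. The `ALIASES` table and `resolve` helper below are illustrative; PAL's actual lookup logic is not shown in this PR:

```python
# Illustrative alias table mirroring the catalogue entries in this PR.
ALIASES = {
    "gpt5.4-pro": "gpt-5.4-pro",
    "gpt5.4pro": "gpt-5.4-pro",
    "5.4-pro": "gpt-5.4-pro",   # short form the review suggests adding
    "gpt5.4": "gpt-5.4",
    "5.4": "gpt-5.4",
}

def resolve(name: str) -> str:
    """Map a user-supplied alias to its canonical model name;
    unknown names pass through unchanged."""
    return ALIASES.get(name.lower(), name)
```

Without the `"5.4-pro"` row, a user typing the obvious short form would fall through unresolved while `"5.4"` works for the base model.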

