
Add OpenAI's GPT-5.4 model#2247

Merged
naorpeled merged 1 commit into qodo-ai:main from PeterDaveHelloKitchen:gpt-5-4-model-support
Mar 6, 2026

Conversation

@PeterDaveHello
Contributor

@qodo-free-for-open-source-projects
Contributor

Review Summary by Qodo

Add OpenAI's GPT-5.4 model support

✨ Enhancement


Walkthrough

Description
• Add GPT-5.4 model support with 272K token context window
• Register both base and dated model variants in token limits
• Add unit tests for GPT-5.4 token limit validation
• Update GPT-5 model detection tests to include new variants
Diagram

```mermaid
flowchart LR
  A["GPT-5.4 Models"] --> B["Token Limits Config"]
  A --> C["Unit Tests"]
  B --> D["272K Context Window"]
  C --> E["Model Validation"]
```


File Changes

1. pr_agent/algo/__init__.py ✨ Enhancement +2/-0

Register GPT-5.4 models with token limits

• Add gpt-5.4 model with 272K token limit
• Add gpt-5.4-2026-03-05 dated variant with 272K token limit
• Include comments explaining 272K as safe default without opt-in 1M context

pr_agent/algo/__init__.py
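From the diff summary, the new registrations presumably look like the following sketch. The surrounding MAX_TOKENS dictionary is abridged; only the two new entries come from this PR's diff.

```python
# Sketch of the addition to MAX_TOKENS in pr_agent/algo/__init__.py.
# The real dictionary holds many other models; only the two entries
# below are taken from this PR.
MAX_TOKENS = {
    # ... existing model entries elided ...
    'gpt-5.4': 272000,  # 272K safe default without opt-in 1M context parameters
    'gpt-5.4-2026-03-05': 272000,  # dated variant, same 272K default
}
```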


2. tests/unittest/test_get_max_tokens.py 🧪 Tests +13/-0

Add GPT-5.4 token limit unit tests

• Add parametrized test for both GPT-5.4 model variants
• Verify both gpt-5.4 and gpt-5.4-2026-03-05 return 272000 tokens
• Test with mocked settings to validate token limit retrieval

tests/unittest/test_get_max_tokens.py


3. tests/unittest/test_litellm_reasoning_effort.py 🧪 Tests +1/-1

Update GPT-5 model detection tests

• Update GPT-5 model detection test to include new variants
• Add gpt-5.4 and gpt-5.4-2026-03-05 to model version test list
• Ensure reasoning_effort logic applies to new GPT-5.4 models

tests/unittest/test_litellm_reasoning_effort.py



@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Mar 6, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. gpt5_models line over 120 📘 Rule violation ✓ Correctness
Description
The modified gpt5_models assignment is a single line that exceeds the repository's Ruff
line-length = 120 limit, which can cause lint failures. Reformatting the list into a multi-line
structure restores compliance.
Code

tests/unittest/test_litellm_reasoning_effort.py[310]

+        gpt5_models = ["gpt-5-2025-08-07", "gpt-5.1", "gpt-5.4", "gpt-5.4-2026-03-05", "gpt-5-turbo", "gpt-5.1-codex", "gpt-5.3-codex"]
Evidence
PR Compliance ID 8 requires adherence to Ruff style including 120-character lines; the repo config
sets line-length = 120 and the changed gpt5_models assignment is a long single line that exceeds
this limit.

AGENTS.md
pyproject.toml[47-49]
tests/unittest/test_litellm_reasoning_effort.py[310-310]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The updated `gpt5_models` assignment exceeds the repository Ruff `line-length = 120` requirement, which may fail lint checks.
## Issue Context
Ruff is configured with `line-length = 120` in `pyproject.toml`.
## Fix Focus Areas
- tests/unittest/test_litellm_reasoning_effort.py[310-310]
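A minimal fix, assuming the list contents shown in the evidence above, is to break the assignment across lines so no line exceeds 120 characters:

```python
# Reformatted to satisfy Ruff's line-length = 120 (one model per line).
gpt5_models = [
    "gpt-5-2025-08-07",
    "gpt-5.1",
    "gpt-5.4",
    "gpt-5.4-2026-03-05",
    "gpt-5-turbo",
    "gpt-5.1-codex",
    "gpt-5.3-codex",
]
```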




Remediation recommended

2. MAX_TOKENS hardcodes gpt-5.4 📘 Rule violation ⛯ Reliability ⭐ New
Description
The PR introduces hardcoded runtime token limits for gpt-5.4 models in code instead of using the
repo’s TOML/Dynaconf override mechanisms. This makes runtime behavior harder to change/review and
conflicts with the configuration-driven requirement.
Code

pr_agent/algo/__init__.py[R45-46]

+    'gpt-5.4': 272000,  # 272K safe default without opt-in 1M context parameters
+    'gpt-5.4-2026-03-05': 272000,  # 272K safe default without opt-in 1M context parameters
Evidence
Compliance requires avoiding newly introduced hardcoded configuration values in application logic
and using .pr_agent.toml or pr_agent/settings/ overrides instead. The added gpt-5.4 entries
hardcode max token values directly in MAX_TOKENS.

AGENTS.md
pr_agent/algo/__init__.py[45-46]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
New runtime behavior for `gpt-5.4` max tokens is hardcoded in `MAX_TOKENS` rather than being configurable via `.pr_agent.toml` or `pr_agent/settings/*.toml` overrides.

## Issue Context
The repo already supports config-based limits via `config.max_model_tokens` / `config.custom_model_max_tokens`. Add a config-driven mechanism for per-model max tokens (or a model override entry) and have code consult it first.

## Fix Focus Areas
- pr_agent/algo/__init__.py[45-46]
- pr_agent/algo/utils.py[994-1012]
- pr_agent/settings/configuration.toml[29-34]
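A possible shape for such a config-first lookup is sketched below. The `model_token_overrides` setting and the `get_max_tokens` signature are illustrative assumptions, not the repository's actual API:

```python
# Hypothetical sketch: consult configurable overrides before falling back to
# the hardcoded default table. Names here are illustrative, not the real schema.
DEFAULT_MAX_TOKENS = {
    'gpt-5.4': 272000,
    'gpt-5.4-2026-03-05': 272000,
}

def get_max_tokens(model, settings):
    """Return the token limit for `model`, preferring config-driven overrides.

    `settings` is assumed to expose `config.max_model_tokens` (a global cap)
    and a hypothetical `config.model_token_overrides` per-model mapping.
    """
    overrides = getattr(settings.config, 'model_token_overrides', None) or {}
    limit = overrides.get(model) or DEFAULT_MAX_TOKENS.get(model)
    cap = getattr(settings.config, 'max_model_tokens', 0)
    # A global cap, when set, bounds the per-model limit.
    return min(limit, cap) if (cap and limit) else limit
```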



3. Duplicated fake_settings construction 📘 Rule violation ✓ Correctness ⭐ New
Description
The new unit test duplicates the fake_settings construction pattern instead of reusing a
helper/fixture, increasing maintenance cost and reducing readability. This violates the expectation
to reduce duplication and clarify control flow in touched code.
Code

tests/unittest/test_get_max_tokens.py[R25-32]

+    @pytest.mark.parametrize("model", ["gpt-5.4", "gpt-5.4-2026-03-05"])
+    def test_gpt54_model_max_tokens(self, monkeypatch, model):
+        fake_settings = type('', (), {
+            'config': type('', (), {
+                'custom_model_max_tokens': 0,
+                'max_model_tokens': 0
+            })()
+        })()
Evidence
The compliance rule requires reducing duplication by extracting repeated logic into
variables/helpers/fixtures. The added test introduces another inline fake_settings = type(...
block duplicating the same setup already used in other tests in this file.

tests/unittest/test_get_max_tokens.py[25-32]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new test adds another inline `fake_settings` construction block, duplicating the same setup repeated across tests.

## Issue Context
Use a pytest fixture (e.g., `@pytest.fixture` returning `fake_settings`) or a small helper function inside the test module/class to generate settings, and reuse it across tests.

## Fix Focus Areas
- tests/unittest/test_get_max_tokens.py[25-37]
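One way to follow this suggestion is a small module-level helper, optionally wrapped in a pytest fixture. The `make_fake_settings` name is illustrative, not taken from the repository:

```python
import pytest

def make_fake_settings():
    """Build the settings stand-in once, instead of inlining type(...) per test."""
    config = type('Config', (), {'custom_model_max_tokens': 0, 'max_model_tokens': 0})()
    return type('Settings', (), {'config': config})()

@pytest.fixture
def fake_settings():
    # Fixture wrapper so tests can simply declare `fake_settings` as a parameter.
    return make_fake_settings()

@pytest.mark.parametrize("model", ["gpt-5.4", "gpt-5.4-2026-03-05"])
def test_gpt54_model_max_tokens(fake_settings, model):
    # Sketch only: the real test would monkeypatch settings and call get_max_tokens.
    assert fake_settings.config.custom_model_max_tokens == 0
```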




Previous review results

Review updated until commit 58db87d

Results up to commit 1bcefef


🐞 Bugs (0) 📘 Rule violations (1) 📎 Requirement gaps (0)

Action required
1. gpt5_models line over 120 📘 Rule violation ✓ Correctness
This earlier run flagged the same over-length `gpt5_models` assignment described in issue 1 of the current review above; the description, evidence, and agent prompt were identical.

@PeterDaveHello
Contributor Author

@naorpeled this one should be good 👍

@PeterDaveHello force-pushed the gpt-5-4-model-support branch from 1bcefef to 58db87d on March 6, 2026 at 17:01
@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects bot commented Mar 6, 2026

Persistent review updated to latest commit 58db87d

@naorpeled naorpeled merged commit 80ec6a5 into qodo-ai:main Mar 6, 2026
2 checks passed