
fix: initialize max_tokens before try block in _calc_claude_tokens #2269

Open
wishhyt wants to merge 1 commit into qodo-ai:main from wishhyt:fix/token-handler-unbound-variable

Conversation

@wishhyt wishhyt commented Mar 18, 2026

Summary

  • In _calc_claude_tokens, the except block references max_tokens which is only assigned inside the try block at line 105
  • If an exception occurs before that assignment (e.g., the anthropic import fails, the API key is invalid, or the model name is not in MAX_TOKENS), the except handler itself raises UnboundLocalError (a subclass of NameError) when it references the unbound max_tokens
  • This masks the original error and crashes instead of gracefully handling the failure

Changes

pr_agent/algo/token_handler.py: Initialize max_tokens = 0 before the try block as a safe fallback

Test plan

  • Verify that _calc_claude_tokens handles missing anthropic dependency gracefully
  • Verify that normal Anthropic token counting still works correctly

If an exception is raised before the `max_tokens` assignment (e.g.,
anthropic import fails or the model key is missing from MAX_TOKENS),
the except block references the unbound `max_tokens` variable,
raising NameError and masking the original error. Initialize
max_tokens to 0 before the try block as a safe fallback.

Made-with: Cursor
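The failure mode described above can be reproduced in miniature. The sketch below uses simplified, hypothetical names rather than the actual pr_agent code, but shows the same pattern: the except handler references a variable that is only assigned inside the try block, so the handler's own return statement blows up and masks the original KeyError.

```python
# Minimal, hypothetical reproduction of the pitfall (names simplified;
# not the actual pr_agent code).
MAX_TOKENS = {"claude-3-opus": 200_000}

def calc_tokens_buggy(model_name: str) -> int:
    try:
        max_tokens = MAX_TOKENS[model_name]  # KeyError for unknown models
        return max_tokens
    except Exception:
        # max_tokens was never assigned, so this line itself raises
        # UnboundLocalError (a NameError subclass), masking the KeyError.
        return max_tokens

def calc_tokens_fixed(model_name: str) -> int:
    max_tokens = 0  # bound before the try block: a safe fallback
    try:
        max_tokens = MAX_TOKENS[model_name]
        return max_tokens
    except Exception:
        return max_tokens  # always bound now; the original error surfaces in logs
```

With the fix, an unknown model returns the fallback value instead of crashing the error handler.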
@qodo-free-for-open-source-projects
Copy link
Copy Markdown
Contributor

Review Summary by Qodo

Initialize max_tokens before try block in _calc_claude_tokens

🐞 Bug fix


Walkthroughs

Description
• Initialize max_tokens variable before try block to prevent NameError
• Prevents masking of original exceptions with unbound variable error
• Ensures graceful error handling when anthropic import or model lookup fails
Diagram
flowchart LR
  A["_calc_claude_tokens method"] -->|Initialize max_tokens = 0| B["Before try block"]
  B -->|Try to import anthropic| C["Import and lookup"]
  C -->|Success| D["Use max_tokens value"]
  C -->|Exception| E["Except block uses initialized value"]
  E -->|Return gracefully| F["Error handled properly"]


File Changes

1. pr_agent/algo/token_handler.py 🐞 Bug fix +2/-1

Initialize max_tokens variable before try block

• Initialize max_tokens = 0 before the try block in _calc_claude_tokens method
• Prevents NameError when exceptions occur before max_tokens assignment
• Ensures except block can safely reference max_tokens variable
• Allows original exceptions to be properly caught and logged





qodo-free-for-open-source-projects bot commented Mar 18, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (1) 📎 Requirement gaps (0)



Action required

1. No tests for _calc_claude_tokens 📘 Rule violation ⛯ Reliability
Description
This PR changes runtime error-handling behavior in _calc_claude_tokens (preventing a NameError
and altering fallback behavior), but it does not add/update pytest coverage for these scenarios.
Without tests, regressions in the Anthropic-token-counting fallback path may go undetected.
Code

pr_agent/algo/token_handler.py[R99-101]

    def _calc_claude_tokens(self, patch: str) -> int:
+        max_tokens = 0
        try:
Evidence
PR Compliance ID 12 requires behavior changes to include corresponding pytest tests in the
appropriate tests directory. The PR modifies _calc_claude_tokens to change its exception-path
behavior, and the PR description’s test plan describes validations, but no test additions/updates
are included in the provided PR change set.

AGENTS.md
pr_agent/algo/token_handler.py[99-126]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
This PR changes `_calc_claude_tokens` error-path behavior, but there are no pytest tests added/updated to ensure the fallback behavior remains correct.

## Issue Context
The function dynamically imports `anthropic` and reads `MAX_TOKENS` and the configured model; exceptions in these steps are expected to return a safe fallback (`max_tokens`) without masking the original failure.

## Fix Focus Areas
- pr_agent/algo/token_handler.py[99-127]
- tests/unittest/test_token_handler.py[1-200]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
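A pytest-style sketch of the coverage this rule violation asks for could look like the following. The function here is a simplified stand-in for TokenHandler._calc_claude_tokens (import paths and the real class are assumptions; an actual test would live in tests/unittest/test_token_handler.py and mock the Anthropic client and settings).

```python
# Hypothetical stand-in for TokenHandler._calc_claude_tokens, simplified
# so the tests are self-contained; a real test would patch the actual class.
MAX_TOKENS = {"claude-3-opus": 200_000}

def calc_claude_tokens(patch: str, model: str) -> int:
    max_tokens = 0
    try:
        max_tokens = MAX_TOKENS[model]  # raises KeyError for unknown models
        return max_tokens
    except Exception:
        return max_tokens

def test_unknown_model_returns_fallback_without_nameerror():
    # Before the fix this path raised UnboundLocalError; now it falls back.
    assert calc_claude_tokens("some diff", "not-a-claude-model") == 0

def test_known_model_returns_limit():
    assert calc_claude_tokens("some diff", "claude-3-opus") == 200_000
```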


2. Zero tokens on failure 🐞 Bug ✓ Correctness
Description
_calc_claude_tokens now returns 0 tokens when an exception occurs before max_tokens is
assigned (e.g., client initialization failure or MAX_TOKENS[...] KeyError). Returning 0 for
non-empty input can cause downstream code that trims/clips based on token counts to skip trimming
and attempt oversized requests.
Code

pr_agent/algo/token_handler.py[R99-106]

    def _calc_claude_tokens(self, patch: str) -> int:
+        max_tokens = 0
        try:
            import anthropic
            from pr_agent.algo import MAX_TOKENS
-            
+
            client = anthropic.Anthropic(api_key=get_settings(use_context=False).get('anthropic.key'))
            max_tokens = MAX_TOKENS[get_settings().config.model]
Evidence
The changed initialization max_tokens = 0 becomes the returned value for any exception thrown
before the later assignment to max_tokens, because the except handler returns max_tokens. This
is reachable for Claude-like models because model type detection is substring-based (`'claude' in
model_name`) and the model-to-limit lookup uses a dict index (`MAX_TOKENS[model]`) which can raise
KeyError. Downstream, help-docs prompt trimming uses the returned token count to decide whether to
clip; a 0 token count prevents clipping even when the text is large.

pr_agent/algo/token_handler.py[99-127]
pr_agent/algo/token_handler.py[12-20]
pr_agent/algo/token_handler.py[134-153]
pr_agent/tools/pr_help_docs.py[457-486]
pr_agent/algo/utils.py[991-1009]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`_calc_claude_tokens` initializes `max_tokens` to `0` and returns it on exceptions that occur before `max_tokens` is assigned from `MAX_TOKENS[...]`. This can produce a 0 token count for non-empty text, which can bypass downstream clipping decisions.

## Issue Context
- Exceptions can occur before `max_tokens = MAX_TOKENS[...]` (e.g., Claude-like model name not in `MAX_TOKENS`, or client init failures).
- Downstream code (e.g., help docs prompt trimming) uses the returned token count to decide whether to clip input.

## Fix Focus Areas
- pr_agent/algo/token_handler.py[99-127]

## Implementation notes
- Prefer computing `max_tokens` *before* constructing the Anthropic client.
- Replace `MAX_TOKENS[model]` with a safe fallback:
 - `MAX_TOKENS.get(model)` and/or `get_settings().config.custom_model_max_tokens`.
 - If neither is available, either raise (consistent with `get_max_tokens`) or fallback to a local estimate like `len(self.encoder.encode(patch, disallowed_special=()))`.
- In the `except`, return the conservative `max_tokens`/estimate (not 0) so downstream trimming remains safe.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
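The implementation notes above can be sketched as follows. This is a hypothetical simplification, not the project's actual code: the real implementation would use pr_agent's MAX_TOKENS dict, get_settings().config.custom_model_max_tokens, and its tiktoken encoder in place of the stand-ins here.

```python
from typing import Optional

# Hypothetical sketch of the layered fallback suggested above
# (simplified names; not the actual pr_agent code).
MAX_TOKENS = {"claude-3-opus": 200_000}

def rough_token_estimate(text: str) -> int:
    # Local stand-in for len(encoder.encode(text)): ~4 chars per token.
    return max(1, len(text) // 4)

def calc_claude_tokens(patch: str, model: str,
                       custom_max: Optional[int] = None) -> int:
    # Resolve the limit before constructing any API client, using
    # .get() plus a configured override instead of a bare dict index.
    max_tokens = MAX_TOKENS.get(model) or custom_max
    if max_tokens is None:
        # Unknown model: return a conservative local estimate instead of 0,
        # so downstream clipping still triggers for large inputs.
        return rough_token_estimate(patch)
    try:
        # ... real code would call the Anthropic count-tokens API here ...
        raise RuntimeError("API unavailable in this sketch")
    except Exception:
        # Conservative on failure: assume the patch fills the window.
        return max_tokens
```

Returning the resolved limit (or a local estimate) on failure keeps downstream trimming safe, whereas returning 0 would let oversized requests through unclipped.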


