fix: initialize max_tokens before try block in _calc_claude_tokens #2269
wishhyt wants to merge 1 commit into qodo-ai:main from
Conversation
If an exception is raised before the `max_tokens` assignment (e.g., the anthropic import fails or the model key is missing from MAX_TOKENS), the except block references the unbound `max_tokens` variable, raising NameError and masking the original error. Initialize max_tokens to 0 before the try block as a safe fallback.

Made-with: Cursor
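The failure mode and the fix can be shown with a minimal, self-contained sketch (the function names and the lookup table are illustrative stand-ins, not the actual pr_agent code):

```python
def calc_tokens_buggy(model: str, max_tokens_table: dict) -> int:
    try:
        max_tokens = max_tokens_table[model]  # KeyError for an unknown model
        return max_tokens
    except Exception:
        # If the lookup itself raised, max_tokens was never bound, so this
        # line raises UnboundLocalError (a NameError), masking the KeyError.
        return max_tokens

def calc_tokens_fixed(model: str, max_tokens_table: dict) -> int:
    max_tokens = 0  # bound before the try block, as in this PR
    try:
        max_tokens = max_tokens_table[model]
    except Exception:
        pass  # the original error can be logged; max_tokens stays 0
    return max_tokens
```

With the fix, a failed lookup returns the fallback instead of raising a second, unrelated error.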
Review Summary by Qodo
Initialize max_tokens before try block in _calc_claude_tokens

Walkthrough

Description
• Initialize the max_tokens variable before the try block to prevent NameError
• Prevent masking of the original exception with an unbound-variable error
• Ensure graceful error handling when the anthropic import or model lookup fails

Diagram
flowchart LR
A["_calc_claude_tokens method"] -->|Initialize max_tokens = 0| B["Before try block"]
B -->|Try to import anthropic| C["Import and lookup"]
C -->|Success| D["Use max_tokens value"]
C -->|Exception| E["Except block uses initialized value"]
E -->|Return gracefully| F["Error handled properly"]
File Changes
1. pr_agent/algo/token_handler.py
Code Review by Qodo
1. No tests for _calc_claude_tokens
    def _calc_claude_tokens(self, patch: str) -> int:
        max_tokens = 0
        try:
1. No tests for _calc_claude_tokens 📘 Rule violation ⛯ Reliability
This PR changes runtime error-handling behavior in _calc_claude_tokens (preventing a NameError and altering fallback behavior), but it does not add or update pytest coverage for these scenarios. Without tests, regressions in the Anthropic token-counting fallback path may go undetected.
Agent Prompt
## Issue description
This PR changes `_calc_claude_tokens` error-path behavior, but there are no pytest tests added/updated to ensure the fallback behavior remains correct.
## Issue Context
The function dynamically imports `anthropic` and reads `MAX_TOKENS` and the configured model; exceptions in these steps are expected to return a safe fallback (`max_tokens`) without masking the original failure.
## Fix Focus Areas
- pr_agent/algo/token_handler.py[99-127]
- tests/unittest/test_token_handler.py[1-200]
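A sketch of the kind of pytest coverage the rule asks for, using a stand-in for `_calc_claude_tokens` (the real method lives in pr_agent/algo/token_handler.py, reads the model from settings, and calls the Anthropic client; the table and estimate below are placeholders):

```python
MAX_TOKENS = {"claude-3-opus": 200_000}  # stand-in for pr_agent.algo.MAX_TOKENS

def calc_claude_tokens(patch: str, model: str) -> int:
    max_tokens = 0  # bound before the try block, per this PR
    try:
        max_tokens = MAX_TOKENS[model]  # raises KeyError for unknown models
        # The real code would call the Anthropic count-tokens API here;
        # a rough 4-chars-per-token estimate stands in for it.
        return min(len(patch) // 4, max_tokens)
    except Exception:
        return max_tokens  # the fallback path under test

def test_known_model_counts_tokens():
    assert calc_claude_tokens("x" * 400, "claude-3-opus") == 100

def test_unknown_model_falls_back_without_nameerror():
    # Before the fix this path raised NameError; now it returns the fallback.
    assert calc_claude_tokens("some patch", "no-such-model") == 0
```

The real tests would patch the settings lookup and the Anthropic client (e.g., with pytest's `monkeypatch`) rather than use a stand-in function.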
    def _calc_claude_tokens(self, patch: str) -> int:
        max_tokens = 0
        try:
            import anthropic
            from pr_agent.algo import MAX_TOKENS

            client = anthropic.Anthropic(api_key=get_settings(use_context=False).get('anthropic.key'))
            max_tokens = MAX_TOKENS[get_settings().config.model]
2. Zero tokens on failure 🐞 Bug ✓ Correctness
_calc_claude_tokens now returns 0 tokens when an exception occurs before max_tokens is assigned (e.g., a client-initialization failure or a MAX_TOKENS[...] KeyError). Returning 0 for non-empty input can cause downstream code that trims or clips based on token counts to skip trimming and attempt oversized requests.
Agent Prompt
## Issue description
`_calc_claude_tokens` initializes `max_tokens` to `0` and returns it on exceptions that occur before `max_tokens` is assigned from `MAX_TOKENS[...]`. This can produce a 0 token count for non-empty text, which can bypass downstream clipping decisions.
## Issue Context
- Exceptions can occur before `max_tokens = MAX_TOKENS[...]` (e.g., Claude-like model name not in `MAX_TOKENS`, or client init failures).
- Downstream code (e.g., help docs prompt trimming) uses the returned token count to decide whether to clip input.
## Fix Focus Areas
- pr_agent/algo/token_handler.py[99-127]
## Implementation notes
- Prefer computing `max_tokens` *before* constructing the Anthropic client.
- Replace `MAX_TOKENS[model]` with a safe fallback:
- `MAX_TOKENS.get(model)` and/or `get_settings().config.custom_model_max_tokens`.
- If neither is available, either raise (consistent with `get_max_tokens`) or fallback to a local estimate like `len(self.encoder.encode(patch, disallowed_special=()))`.
- In the `except`, return the conservative `max_tokens`/estimate (not 0) so downstream trimming remains safe.
Summary
Changes
pr_agent/algo/token_handler.py: Initialize max_tokens = 0 before the try block as a safe fallback
Test plan