
Conversation


@5m2wse50 5m2wse50 commented Oct 17, 2025

This PR introduces two improvements to model handling:

  1. Updates OpenAI Models: Adds gpt-4o and gpt-4-turbo to the list of known models, allowing for accurate token limit calculations for these newer models.
  2. Graceful Handling of Unknown Models: The splitter no longer raises a KeyError when an unknown model name is used. Instead, it falls back to a default tokenizer (cl100k_base) and prints a warning to stderr. This makes the tool more robust.

Includes new tests to verify both functionalities.
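
In sketch form, the graceful-fallback logic looks roughly like this (condensed from the reviewer snippets below; the `OPENAI_MODELS` excerpt and warning wording are abbreviated, and 2048 is the PR's current default for unknown models):

```python
import sys
import tiktoken

OPENAI_MODELS = {"gpt-3.5-turbo": 4096, "gpt-4o": 128000, "gpt-4-turbo": 128000}  # excerpt

class MarkdownLLMSplitter:
    def __init__(self, gptok_model: str = "gpt-3.5-turbo", gptok_limit: int = None) -> None:
        try:
            self.gptoker = tiktoken.encoding_for_model(gptok_model)
        except KeyError:
            # tiktoken does not recognize the name: fall back to a default encoding
            self.gptoker = tiktoken.get_encoding("cl100k_base")
        if gptok_model not in OPENAI_MODELS:
            # warning goes to stderr so normal output stays clean
            print(f"Warning: Model '{gptok_model}' not found...", file=sys.stderr)
        self.gptok_limit = gptok_limit or OPENAI_MODELS.get(gptok_model, 2048)
```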

Summary by Sourcery

Update the list of known OpenAI models to include gpt-4o and gpt-4-turbo, ensure accurate token limit calculations for them, and handle unrecognized model names gracefully by using a default tokenizer and emitting a warning.

New Features:

  • Recognize gpt-4o and gpt-4-turbo models with correct token limits

Bug Fixes:

  • Gracefully handle unknown model names by falling back to default tokenizer and printing a warning instead of raising a KeyError

Tests:

  • Add tests for new OpenAI models' token limits and for warning on unknown models


sourcery-ai bot commented Oct 17, 2025

Reviewer's Guide

This PR enhances model support by extending the OPENAI_MODELS mapping with two new high-capacity variants and hardening the tokenizer initialization to gracefully handle unknown model names by falling back to a default encoding and issuing a warning. It also introduces tests to verify both the updated model limits and the warning behavior for unrecognized models.

Sequence diagram for graceful handling of unknown model names

```mermaid
sequenceDiagram
    participant User
    participant MarkdownLLMSplitter
    participant tiktoken
    participant sys.stderr
    User->>MarkdownLLMSplitter: Instantiate with unknown gptok_model
    MarkdownLLMSplitter->>tiktoken: encoding_for_model(gptok_model)
    tiktoken-->>MarkdownLLMSplitter: KeyError
    MarkdownLLMSplitter->>tiktoken: get_encoding("cl100k_base")
    MarkdownLLMSplitter->>sys.stderr: Print warning
    MarkdownLLMSplitter-->>User: Instance ready (with fallback encoding)
```

Entity relationship diagram for updated OPENAI_MODELS mapping

```mermaid
erDiagram
    OPENAI_MODELS {
        string model_name
        int token_limit
    }
    MarkdownLLMSplitter {
        string gptok_model
        int gptok_limit
    }
    OPENAI_MODELS ||--o| MarkdownLLMSplitter : "used for token limits"
```

Class diagram for updated MarkdownLLMSplitter initialization

```mermaid
classDiagram
class MarkdownLLMSplitter {
  - gptoker
  - gptok_limit
  - md_meta
  - md_str
  + __init__(gptok_model: str = "gpt-3.5-turbo", gptok_limit: int = None)
}
MarkdownLLMSplitter : +__init__() uses tiktoken.encoding_for_model()
MarkdownLLMSplitter : +__init__() falls back to tiktoken.get_encoding("cl100k_base") on KeyError
MarkdownLLMSplitter : +__init__() prints warning to sys.stderr if model unknown
```

File-Level Changes

| Change | Details | Files |
| --- | --- | --- |
| Include GPT-4 variants in known model limits | Added `gpt-4o` and `gpt-4-turbo` entries to the `OPENAI_MODELS` dict with a 128k token limit | `src/split_markdown4gpt/splitter.py` |
| Fallback and warning for unknown models | Wrapped the `encoding_for_model` call in try/except to catch `KeyError`; on failure, default to the `cl100k_base` encoding; print a warning to stderr when the model is not in `OPENAI_MODELS` | `src/split_markdown4gpt/splitter.py` |
| Tests for new and unknown model handling | Added a test verifying `gpt-4o` and `gpt-4-turbo` token limits; added a test capturing the stderr warning for an unrecognized model | `tests/test_splitter.py` |
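
The gist of the model-limit test, sketched (the actual test is excerpted in the review comment below; the function name here is illustrative):

```python
def test_new_model_token_limits():
    assert MarkdownLLMSplitter(gptok_model="gpt-4o").gptok_limit == 128000
    assert MarkdownLLMSplitter(gptok_model="gpt-4-turbo").gptok_limit == 128000
```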



qodo-code-review bot commented Oct 17, 2025

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢 No security concerns identified. No security vulnerabilities detected by AI analysis; human verification advised for critical code.

Ticket Compliance
🎫 No ticket provided
- [ ] Create ticket/issue

Codebase Duplication Compliance
Codebase context is not defined. Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided. Follow the guide to enable custom compliance checks.

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `tests/test_splitter.py:21-30` </location>
<code_context>
```diff
+    splitter_4_turbo = MarkdownLLMSplitter(gptok_model="gpt-4-turbo")
+    assert splitter_4_turbo.gptok_limit == 128000
+
+def test_unknown_model_warning():
+    """Test that a warning is printed for unknown models."""
+    # Redirect stderr to capture the warning message
+    old_stderr = sys.stderr
+    sys.stderr = captured_stderr = StringIO()
+
+    MarkdownLLMSplitter(gptok_model="claude-3-opus-20240229")
+
+    # Restore stderr
+    sys.stderr = old_stderr
+
+    warning_message = captured_stderr.getvalue()
+    assert "Warning: Model 'claude-3-opus-20240229' not found" in warning_message
```
</code_context>

<issue_to_address>
**suggestion (testing):** Test does not assert the fallback tokenizer or token limit for unknown models.

Please add assertions to verify that the fallback tokenizer ('cl100k_base') is used and that the default token limit is correctly set when an unknown model is provided.
</issue_to_address>
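
A sketch of what the strengthened test could look like (attribute names `gptoker` and `gptok_limit` per the class diagram above; 2048 is the PR's current default limit for unknown models):

```python
def test_unknown_model_fallback():
    splitter = MarkdownLLMSplitter(gptok_model="claude-3-opus-20240229")
    # The fallback tokenizer should be the cl100k_base encoding
    assert splitter.gptoker.name == "cl100k_base"
    # Unknown models fall back to the default token limit
    assert splitter.gptok_limit == 2048
```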



qodo-code-review bot commented Oct 17, 2025

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: High-level · Impact: Medium

Use a more sensible default limit

The fallback for unknown models uses a hardcoded token limit of 2048. This
should be increased to a more modern default, like 8192, to better suit models
that use the cl100k_base tokenizer.

Examples:

`src/split_markdown4gpt/splitter.py` [86]

```python
        self.gptok_limit = gptok_limit or OPENAI_MODELS.get(gptok_model, 2048)
```

Solution Walkthrough:

Before:

```python
class MarkdownLLMSplitter:
    def __init__(
        self, gptok_model: str = "gpt-3.5-turbo", gptok_limit: int = None
    ) -> None:
        try:
            self.gptoker = tiktoken.encoding_for_model(gptok_model)
        except KeyError:
            self.gptoker = tiktoken.get_encoding("cl100k_base")
        if gptok_model not in OPENAI_MODELS:
            print(f"Warning: Model '{gptok_model}' not found...")

        # For unknown models, this defaults to 2048
        self.gptok_limit = gptok_limit or OPENAI_MODELS.get(gptok_model, 2048)
```

After:

```python
class MarkdownLLMSplitter:
    def __init__(
        self, gptok_model: str = "gpt-3.5-turbo", gptok_limit: int = None
    ) -> None:
        try:
            self.gptoker = tiktoken.encoding_for_model(gptok_model)
        except KeyError:
            self.gptoker = tiktoken.get_encoding("cl100k_base")
        if gptok_model not in OPENAI_MODELS:
            print(f"Warning: Model '{gptok_model}' not found...")

        # For unknown models, this defaults to a more reasonable 8192
        self.gptok_limit = gptok_limit or OPENAI_MODELS.get(gptok_model, 8192)
```
Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies that the hardcoded fallback token limit of 2048 is suboptimal for the new graceful handling logic, significantly improving the utility of the feature for unknown models.

Category: Possible issue · Impact: Low

Ensure stderr is restored after test

To ensure sys.stderr is always restored, wrap the test logic that redirects it
within a try...finally block.

`tests/test_splitter.py` [23-33]

```diff
 # Redirect stderr to capture the warning message
 old_stderr = sys.stderr
 sys.stderr = captured_stderr = StringIO()
+try:
+    MarkdownLLMSplitter(gptok_model="claude-3-opus-20240229")

-MarkdownLLMSplitter(gptok_model="claude-3-opus-20240229")
+    warning_message = captured_stderr.getvalue()
+    assert "Warning: Model 'claude-3-opus-20240229' not found" in warning_message
+finally:
+    # Restore stderr
+    sys.stderr = old_stderr

-# Restore stderr
-sys.stderr = old_stderr
-
-warning_message = captured_stderr.getvalue()
-assert "Warning: Model 'claude-3-opus-20240229' not found" in warning_message
-
```
Suggestion importance[1-10]: 6

Why: The suggestion correctly identifies a potential bug in the test where `sys.stderr` is not guaranteed to be restored, which could cause subsequent tests to fail or behave unexpectedly. Using try...finally is the correct pattern to ensure resource cleanup.
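
As an alternative that avoids manual redirection entirely, pytest's built-in `capsys` fixture captures stderr for you, so no try/finally is needed (a sketch; assumes `MarkdownLLMSplitter` is already imported in `tests/test_splitter.py`):

```python
def test_unknown_model_warning(capsys):
    MarkdownLLMSplitter(gptok_model="claude-3-opus-20240229")
    captured = capsys.readouterr()
    assert "Warning: Model 'claude-3-opus-20240229' not found" in captured.err
```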

- Adds `gpt-5` and `gpt-4.1` to the known models list with their correct token limits.
- Updates tests to verify the new models.
- Improves handling of unknown models by catching the `KeyError` from `tiktoken` and falling back to a default encoding, which prevents crashes and increases robustness.
@5m2wse50
Author

I've updated this PR with more current information. The model list now includes gpt-5 and gpt-4.1 and is more robust against unknown models. Thanks for the feedback.

- Corrects the token limit for `gpt-5` to 400k.
- Adds `gpt-5-mini` and `gpt-5-nano` with a 400k token limit.
- Updates tests to reflect the new, accurate model information.
@5m2wse50
Author

You were right to question the model data. After a much more thorough investigation, I've pushed a new commit to this PR with the correct token limits for the gpt-5 family (400k) and gpt-4.1 (1M), and updated the tests accordingly. This version is now accurate based on the latest information. My apologies for the previous inaccuracies.
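
For reference, the resulting mapping would look roughly like this (a sketch; the surrounding entries and the `gpt-3.5-turbo` value are assumptions, not taken from the diff):

```python
OPENAI_MODELS = {
    "gpt-3.5-turbo": 4096,   # pre-existing entry (value assumed)
    "gpt-4-turbo": 128000,
    "gpt-4o": 128000,
    "gpt-4.1": 1000000,      # 1M, per the final commit
    "gpt-5": 400000,
    "gpt-5-mini": 400000,
    "gpt-5-nano": 400000,
}
```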
