@codeflash-ai codeflash-ai bot commented Oct 22, 2025

📄 231% (2.31x) speedup for mask_tokens_randomly in blanc/utils.py

⏱️ Runtime : 13.6 milliseconds → 4.12 milliseconds (best of 287 runs)

📝 Explanation and details

The optimized code achieves a 231% speedup through several key algorithmic improvements:

**1. Precomputed Next Tokens List**

- Replaces repeated `tokens[idx + 1]` indexing with a single `next_tokens = tokens[1:] + ['']` precomputation
- Eliminates the conditional `'' if idx + 1 == len(tokens) else tokens[idx + 1]` in every loop iteration
- Uses `zip(tokens, next_tokens)` for efficient paired iteration
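The pairing trick can be sketched in isolation; `tokens` here is a toy list, not the library's data:

```python
# Illustrative sketch of the next-token precomputation.
tokens = ["un", "##believ", "##able", "results"]

# Old pattern: a bounds-checking conditional on every iteration
pairs_old = [
    (token, '' if idx + 1 == len(tokens) else tokens[idx + 1])
    for idx, token in enumerate(tokens)
]

# New pattern: pad once, then zip
next_tokens = tokens[1:] + ['']
pairs_new = list(zip(tokens, next_tokens))

assert pairs_old == pairs_new
```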

**2. List Comprehension for Token Position Selection**

- Replaces a manual loop with append operations with a single list comprehension
- Reduces Python bytecode overhead and leverages C-level optimizations in CPython
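A minimal sketch of this rewrite, using a toy `is_eligible` predicate as a stand-in for blanc's real token-length check:

```python
# `is_eligible` is a stand-in predicate for illustration only; the real code
# uses blanc's length-based eligibility test.
def is_eligible(token, next_token):
    return len(token) > 2  # toy rule

tokens = ["a", "token", "##piece", "xy", "words"]
next_tokens = tokens[1:] + ['']

# Old pattern: manual loop with append
token_positions = []
for idx, (token, next_token) in enumerate(zip(tokens, next_tokens)):
    if is_eligible(token, next_token):
        token_positions.append(idx)

# New pattern: one list comprehension, evaluated mostly at C level
token_positions_fast = [
    idx for idx, (token, next_token) in enumerate(zip(tokens, next_tokens))
    if is_eligible(token, next_token)
]

assert token_positions == token_positions_fast  # [1, 2, 4]
```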

**3. Optimized Loop Structure**

- Eliminates the `while len(token_positions) > 0` loop that repeatedly mutated and copied the list
- Uses `range(0, position_count, n_mask)` with slicing, avoiding expensive list resizing operations
- Each iteration processes a fixed slice rather than modifying the original list
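The two chunking patterns can be compared directly on toy values; both produce the same chunks, but the second never copies the list:

```python
positions = list(range(10))
n_mask = 3

# Old pattern: repeatedly slice off the front, copying the list each time
remaining = positions.copy()
chunks_old = []
while len(remaining) > 0:
    chunks_old.append(remaining[:n_mask])
    remaining = remaining[n_mask:]

# New pattern: fixed slices over the untouched list
position_count = len(positions)
chunks_new = [positions[i:i + n_mask] for i in range(0, position_count, n_mask)]

assert chunks_old == chunks_new  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```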

**4. Set-Based Membership Testing**

- Converts `positions_to_mask` from list to set, changing `idx in positions_to_mask` from O(n) to O(1)
- Critical for performance when checking membership across all tokens

**5. Comprehensions Over Manual Loops**

- Replaces nested loops with list/dict comprehensions for building `inputs` and `answers`
- Reduces Python interpreter overhead and function call costs
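Points 4 and 5 combine naturally; a sketch with toy data (the names mirror the description above, not the exact library code):

```python
tokens = ["alpha", "beta", "gamma", "delta"]
mask_token = "[MASK]"
positions_to_mask = {1, 3}  # a set, so `idx in positions_to_mask` is O(1)

# Build the masked input and the answer mapping with comprehensions
inputs = [
    mask_token if idx in positions_to_mask else token
    for idx, token in enumerate(tokens)
]
answers = {idx: tokens[idx] for idx in positions_to_mask}

assert inputs == ["alpha", "[MASK]", "gamma", "[MASK]"]
assert answers == {1: "beta", 3: "delta"}
```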

**Performance Benefits by Test Case:**

- **Large-scale tests** (1000+ tokens) see the biggest gains, 244-423% speedups, because the O(n²) list operations are gone
- **Small tests** run a modest 13-37% slower due to fixed setup costs, but remain correct
- **Edge cases** with no eligible tokens benefit from the early-return optimization (19% faster)

The optimizations are most effective for larger token sequences where the O(1) set operations and reduced list mutations provide substantial savings.
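Putting the pieces together, the optimized control flow has roughly this shape. This is a hedged sketch, not the actual `blanc.utils` implementation: the eligibility rule is simplified to "any non-empty token", and shuffling details may differ.

```python
import random

def mask_tokens_randomly_sketch(tokens, mask_token, p_mask):
    """Simplified sketch of the optimized structure; every non-empty token
    counts as eligible here, unlike the real length-based rule."""
    next_tokens = tokens[1:] + ['']  # point 1: precomputed next tokens
    # point 2: comprehension for position selection (stand-in eligibility test)
    token_positions = [
        idx for idx, (tok, nxt) in enumerate(zip(tokens, next_tokens)) if tok
    ]
    if not token_positions:
        return [], []  # early return for no eligible tokens
    random.shuffle(token_positions)
    n_mask = max(int(len(tokens) * p_mask), 1)
    position_count = len(token_positions)
    masked_inputs, all_answers = [], []
    # point 3: fixed slices instead of a list-mutating while loop
    for start in range(0, position_count, n_mask):
        positions_to_mask = set(token_positions[start:start + n_mask])  # point 4
        # point 5: comprehensions build inputs and answers
        masked_inputs.append([
            mask_token if idx in positions_to_mask else tok
            for idx, tok in enumerate(tokens)
        ])
        all_answers.append({idx: tokens[idx] for idx in positions_to_mask})
    return masked_inputs, all_answers

inputs, answers = mask_tokens_randomly_sketch(["a", "b", "c"], "[MASK]", 1.0)
assert inputs == [["[MASK]", "[MASK]", "[MASK]"]]
assert answers == [{0: "a", 1: "b", 2: "c"}]
```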

Correctness verification report:

| Test | Status |
|------|--------|
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 36 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |
🌀 Generated Regression Tests and Runtime
import random  # used for deterministic testing with seeds
from copy import deepcopy

# imports
import pytest  # used for our unit tests
from blanc.utils import mask_tokens_randomly

# unit tests

# --- BASIC TEST CASES ---

def test_basic_single_token_masking():
    """Test masking a single token sequence with p_mask=0.5"""
    random.seed(42)
    tokens = ["hello", "world"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 6.48μs -> 8.93μs (27.4% slower)
    for i in range(2):
        # The answer dict key is the index of the masked token
        masked_idx = masked_inputs[i].index(mask_token)
        assert masked_idx in all_answers[i]
        assert all_answers[i][masked_idx] == tokens[masked_idx]

def test_basic_masking_with_wordpiece_tokens():
    """Test masking with wordpiece tokens and min_token_lengths"""
    random.seed(1)
    tokens = ["un", "##believ", "##able", "results"]
    min_token_lengths = (2, 3, 2)  # normal, lead, followup
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.92μs -> 10.2μs (22.4% slower)
    # Only eligible tokens are masked: "un" (len=2, next is "##believ", so min_lead=3, not eligible)
    # "##believ" (len=7-2=5, min_followup=2, eligible)
    # "##able" (len=6-2=4, min_followup=2, eligible)
    # "results" (len=7, normal, min_normal=2, eligible)
    # So eligible indices: 1,2,3
    eligible_indices = [1,2,3]
    for i in range(2):
        # Only eligible indices are masked
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == mask_token]
        for idx in masked_idxs:
            assert idx in eligible_indices
        # The answer dict maps the masked indices to the original tokens
        for idx in masked_idxs:
            assert all_answers[i][idx] == tokens[idx]

def test_basic_masking_with_min_token_lengths():
    """Test min_token_lengths: tokens not meeting length are not masked"""
    random.seed(123)
    tokens = ["a", "##b", "##cd", "efg"]
    min_token_lengths = (2, 2, 2)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 6.78μs -> 8.12μs (16.5% slower)
    # "a": len=1, next is "##b", min_lead=2, not eligible
    # "##b": len=2-2=0, min_followup=2, not eligible
    # "##cd": len=4-2=2, min_followup=2, eligible
    # "efg": len=3, normal, min_normal=2, eligible
    eligible_indices = [2,3]
    for i in range(len(masked_inputs)):
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == mask_token]
        for idx in masked_idxs:
            assert idx in eligible_indices
            assert all_answers[i][idx] == tokens[idx]

def test_basic_masking_with_p_mask_1():
    """Test p_mask=1 masks all eligible tokens at once"""
    random.seed(7)
    tokens = ["token1", "token2", "token3"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 6.74μs -> 8.67μs (22.3% slower)
    # p_mask=1 gives n_mask=3, so a single output with every eligible token masked
    assert len(masked_inputs) == 1
    assert masked_inputs[0] == [mask_token] * 3
    assert all_answers[0] == {0: "token1", 1: "token2", 2: "token3"}

def test_basic_masking_with_p_mask_0():
    """Test p_mask=0 still masks at least one eligible token"""
    random.seed(8)
    tokens = ["token1", "token2", "token3"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.53μs -> 10.1μs (25.4% slower)
    # n_mask is clamped to at least 1, so each of the three outputs masks one token
    for i in range(3):
        masked_idx = masked_inputs[i].index(mask_token)
        assert all_answers[i][masked_idx] == tokens[masked_idx]

# --- EDGE TEST CASES ---

def test_edge_empty_tokens():
    """Test with empty tokens list"""
    random.seed(100)
    tokens = []
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 2.28μs -> 3.18μs (28.2% slower)
    assert masked_inputs == []
    assert all_answers == []

def test_edge_no_eligible_tokens():
    """Test when no tokens are eligible for masking"""
    random.seed(101)
    tokens = ["a", "b", "c"]
    min_token_lengths = (10, 10, 10)  # All tokens too short
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 3.65μs -> 4.44μs (17.8% slower)
    assert masked_inputs == []
    assert all_answers == []

def test_edge_all_tokens_are_wordpieces():
    """Test when all tokens are wordpieces and eligible"""
    random.seed(102)
    tokens = ["##ab", "##cd", "##ef"]
    min_token_lengths = (1, 1, 2)
    mask_token = "[MASK]"
    p_mask = 0.5
    # All tokens are wordpieces, len=4-2=2, min_followup=2, all eligible
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 8.11μs -> 10.6μs (23.3% slower)
    for i in range(3):
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == mask_token]
        assert len(masked_idxs) == 1
        assert all_answers[i][masked_idxs[0]] == tokens[masked_idxs[0]]

def test_edge_mask_token_is_in_tokens():
    """Test when mask_token is already present in tokens"""
    random.seed(103)
    tokens = ["foo", "[MASK]", "bar"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.45μs -> 10.0μs (25.7% slower)
    for i in range(3):
        # The answer dict only maps the newly masked index, not the original "[MASK]"
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == mask_token]
        # Only one of these indices is in all_answers[i]
        answer_keys = list(all_answers[i].keys())
        assert len(answer_keys) == 1
        # The answer value should match the original token at that index
        idx = answer_keys[0]
        assert idx in masked_idxs
        assert all_answers[i][idx] == tokens[idx]

def test_edge_min_token_lengths_zero():
    """Test with min_token_lengths all zero"""
    random.seed(104)
    tokens = ["x", "##y", "z"]
    min_token_lengths = (0, 0, 0)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.79μs -> 10.2μs (23.9% slower)
    for i in range(3):
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == mask_token]
        assert len(masked_idxs) == 1

def test_edge_p_mask_greater_than_1():
    """Test p_mask > 1, should mask all eligible tokens at once"""
    random.seed(105)
    tokens = ["a", "b", "c", "d"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 1.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.20μs -> 8.97μs (19.8% slower)
    # n_mask exceeds the token count, so everything is masked in a single output
    assert len(masked_inputs) == 1
    assert masked_inputs[0] == [mask_token] * 4


def test_edge_mask_token_is_empty_string():
    """Test with mask_token as empty string"""
    random.seed(107)
    tokens = ["apple", "banana"]
    min_token_lengths = (1, 1, 1)
    mask_token = ""
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 8.93μs -> 12.1μs (26.0% slower)
    # Each output should have one empty string in place of the masked token
    for i in range(2):
        masked_idxs = [idx for idx, tok in enumerate(masked_inputs[i]) if tok == ""]

def test_edge_tokens_are_all_mask_token():
    """Test when all tokens are already mask_token"""
    random.seed(108)
    tokens = ["[MASK]", "[MASK]", "[MASK]"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 8.52μs -> 11.2μs (23.8% slower)
    # n_mask = 1, three outputs, but answers dict should map the index to "[MASK]"
    for i in range(3):
        answer_keys = list(all_answers[i].keys())
        idx = answer_keys[0]
        assert all_answers[i][idx] == "[MASK]"

# --- LARGE SCALE TEST CASES ---

def test_large_scale_many_tokens():
    """Test with a large number of tokens (1000)"""
    random.seed(200)
    tokens = ["tok{}".format(i) for i in range(1000)]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.05  # 5% of tokens per output
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 3.84ms -> 1.12ms (244% faster)
    n_mask = max(int(1000*0.05), 1)  # 50
    for i in range(20):
        # Each output has 50 masked tokens, except possibly the last one
        if i < 19:
            assert len(all_answers[i]) == n_mask
        else:
            # Last output may have fewer
            remaining = 1000 - 19*50
            assert len(all_answers[i]) == remaining
        # All masked indices map to correct original tokens
        for idx in all_answers[i]:
            assert all_answers[i][idx] == tokens[idx]
            assert masked_inputs[i][idx] == mask_token

def test_large_scale_wordpiece_tokens():
    """Test with 1000 wordpiece tokens, some eligible, some not"""
    random.seed(201)
    tokens = []
    for i in range(500):
        tokens.append("word{}".format(i))  # normal tokens
        tokens.append("##piece{}".format(i))  # wordpiece tokens
    min_token_lengths = (5, 7, 6)
    mask_token = "[MASK]"
    p_mask = 0.1
    # normal tokens: len("wordNNN") >= 5, eligible if next is not wordpiece
    # wordpiece tokens: len("##pieceNNN") = 7+len(str(i)), eligible if >=6
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 3.08ms -> 688μs (347% faster)
    # Count eligible tokens with the same predicate the implementation uses
    from blanc.utils import is_token_large_enough
    eligible_indices = []
    for idx, token in enumerate(tokens):
        next_token = '' if idx + 1 == len(tokens) else tokens[idx + 1]
        if is_token_large_enough(token, next_token, min_token_lengths):
            eligible_indices.append(idx)
    n_mask = max(int(len(tokens)*0.1), 1)
    # Should produce ceil(len(eligible_indices)/n_mask) outputs
    expected_outputs = (len(eligible_indices)+n_mask-1)//n_mask
    assert len(masked_inputs) == expected_outputs
    # All masked indices are eligible and map to the original tokens
    eligible_set = set(eligible_indices)
    for mi, answers in zip(masked_inputs, all_answers):
        for idx in answers:
            assert idx in eligible_set
            assert answers[idx] == tokens[idx]

def test_large_scale_performance():
    """Test that large scale masking does not take excessive time or memory"""
    random.seed(202)
    tokens = ["t{}".format(i) for i in range(999)]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.33
    # Should produce ceil(999/329) = 4 outputs
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 2.92ms -> 558μs (423% faster)
    n_mask = max(int(999*0.33), 1)  # 329
    assert len(masked_inputs) == 4
    for i in range(4):
        # Each output has up to 329 masked tokens
        if i < 3:
            assert len(all_answers[i]) == n_mask
        else:
            remaining = 999 - 3*329
            assert len(all_answers[i]) == remaining

def test_large_scale_randomness_and_determinism():
    """Test that random seed produces deterministic outputs"""
    tokens = ["t{}".format(i) for i in range(100)]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.1
    # Run twice with same seed, outputs should be identical
    random.seed(300)
    out1, ans1 = mask_tokens_randomly(deepcopy(tokens), min_token_lengths, mask_token, p_mask) # 111μs -> 89.4μs (25.0% faster)
    random.seed(300)
    out2, ans2 = mask_tokens_randomly(deepcopy(tokens), min_token_lengths, mask_token, p_mask) # 106μs -> 79.5μs (33.3% faster)
    assert out1 == out2
    assert ans1 == ans2

def test_large_scale_all_tokens_are_ineligible():
    """Test large scale with all tokens ineligible"""
    random.seed(301)
    tokens = ["a"]*999
    min_token_lengths = (10, 10, 10)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 183μs -> 153μs (19.8% faster)
    assert masked_inputs == []
    assert all_answers == []
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import random

# imports
import pytest
from blanc.utils import mask_tokens_randomly

# ----------- Basic Test Cases -----------

def test_basic_single_token():
    # Single token, should always mask the token if it's large enough
    tokens = ["hello"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 4.16μs -> 6.63μs (37.2% slower)
    assert masked_inputs == [[mask_token]]
    assert all_answers == [{0: "hello"}]

def test_basic_multiple_tokens_all_maskable():
    # All tokens are large enough, p_mask=0.5, so n_mask=2 (since max(int(4*0.5),1) = 2)
    tokens = ["hello", "world", "foo", "bar"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 8.45μs -> 10.9μs (22.3% slower)
    for inputs, answers in zip(masked_inputs, all_answers):
        # The answers map masked positions to the original tokens
        for idx in answers:
            assert answers[idx] == tokens[idx]
            assert inputs[idx] == mask_token

def test_basic_wordpiece_lead_and_followup():
    # Test lead and followup tokens with wordpiece prefix
    tokens = ["un", "##believable", "results"]
    min_token_lengths = (2, 2, 5)  # normal:2, lead:2, followup:5
    mask_token = "[MASK]"
    p_mask = 0.5
    # Only "##believable" is large enough as followup (len("believable")=10>=5)
    # "un" is lead (next token is wordpiece), len("un")=2>=2
    # "results" is normal, len("results")=7>=2
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.96μs -> 10.7μs (25.8% slower)
    for inputs, answers in zip(masked_inputs, all_answers):
        for idx in answers:
            assert answers[idx] == tokens[idx]
            assert inputs[idx] == mask_token

def test_basic_mask_token_not_in_tokens():
    # Mask token does not appear in the original tokens
    tokens = ["apple", "banana", "cherry"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.33
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.45μs -> 10.1μs (26.5% slower)
    for inputs in masked_inputs:
        assert mask_token in inputs

# ----------- Edge Test Cases -----------

def test_edge_empty_tokens():
    # Empty input list
    tokens = []
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 2.25μs -> 3.20μs (29.5% slower)
    assert masked_inputs == []
    assert all_answers == []

def test_edge_all_tokens_too_small():
    # All tokens are too small to be masked
    tokens = ["a", "b", "c"]
    min_token_lengths = (2, 2, 2)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 3.67μs -> 4.53μs (18.9% slower)
    assert masked_inputs == []
    assert all_answers == []

def test_edge_no_maskable_tokens_due_to_wordpiece():
    # Only the followup token is large enough, but it's not present
    tokens = ["##a", "##b"]
    min_token_lengths = (2, 2, 2)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 3.40μs -> 4.07μs (16.6% slower)
    assert masked_inputs == []
    assert all_answers == []

def test_edge_p_mask_zero():
    # p_mask=0, but n_mask should be at least 1
    tokens = ["token1", "token2", "token3"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.92μs -> 10.5μs (24.4% slower)
    for inputs, answers in zip(masked_inputs, all_answers):
        assert len(answers) == 1

def test_edge_min_token_lengths_different():
    # Test with different min_token_lengths for normal, lead, followup
    tokens = ["ab", "##cd", "efg", "##hij"]
    min_token_lengths = (2, 3, 2)
    mask_token = "[MASK]"
    p_mask = 0.5
    # "ab" is lead (next is wordpiece), len=2>=3? No
    # "##cd" is followup, len=2>=2? Yes
    # "efg" is lead (next is wordpiece), len=3>=3? Yes
    # "##hij" is followup, len=3>=2? Yes
    # So maskable: idx 1,2,3
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 7.58μs -> 10.0μs (24.4% slower)
    for inputs, answers in zip(masked_inputs, all_answers):
        # Only indices 1,2,3 should be masked
        for idx in answers:
            assert idx in (1, 2, 3)
            assert answers[idx] == tokens[idx]

def test_edge_mask_token_is_wordpiece_prefix():
    # Mask token itself is a wordpiece token
    tokens = ["token", "##piece"]
    min_token_lengths = (1, 1, 1)
    mask_token = "##MASK"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 5.89μs -> 7.85μs (24.9% slower)
    # Both tokens are eligible and n_mask = 2, so one output with both masked
    assert masked_inputs == [[mask_token, mask_token]]

def test_edge_min_token_lengths_zero():
    # Zero min_token_lengths should allow all tokens to be maskable
    tokens = ["a", "##b", "c"]
    min_token_lengths = (0, 0, 0)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 6.55μs -> 8.47μs (22.6% slower)
    assert masked_inputs == [[mask_token] * 3]

def test_edge_mask_token_same_as_input_token():
    # Mask token is same as one of the input tokens
    tokens = ["foo", "[MASK]", "bar"]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 6.29μs -> 8.07μs (22.1% slower)
    assert masked_inputs == [[mask_token] * 3]
    assert all_answers == [{0: "foo", 1: "[MASK]", 2: "bar"}]

# ----------- Large Scale Test Cases -----------

def test_large_scale_100_tokens():
    # 100 tokens, all maskable, p_mask=0.1, so n_mask=10
    tokens = [f"token{i}" for i in range(100)]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 0.1
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 111μs -> 89.3μs (25.3% faster)
    n_mask = max(int(100*0.1), 1)  # 10
    assert len(masked_inputs) == 10
    for inputs, answers in zip(masked_inputs, all_answers):
        assert len(answers) == n_mask
        for idx in answers:
            assert answers[idx] == tokens[idx]

def test_large_scale_999_tokens_varied_lengths():
    # 999 tokens, only those with even index are large enough
    tokens = [f"token{i}" if i%2==0 else "a" for i in range(999)]
    min_token_lengths = (6, 6, 6)
    mask_token = "[MASK]"
    p_mask = 0.05
    # Only even indices are maskable (len("tokenX")>=6)
    maskable_indices = [i for i in range(999) if i%2==0]
    n_mask = max(int(999*0.05),1)
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 2.01ms -> 660μs (204% faster)
    # Number of maskings: ceil(len(maskable_indices)/n_mask)
    expected_maskings = (len(maskable_indices) + n_mask - 1) // n_mask
    assert len(masked_inputs) == expected_maskings
    maskable_set = set(maskable_indices)
    for inputs, answers in zip(masked_inputs, all_answers):
        for idx in answers:
            assert idx in maskable_set
            assert answers[idx] == tokens[idx]

def test_large_scale_wordpiece_mixed():
    # 500 tokens, half are wordpiece followups, half are normal
    tokens = ["token"]*250 + ["##piece"]*250
    min_token_lengths = (5, 5, 5)
    mask_token = "[MASK]"
    p_mask = 0.2
    # All tokens are maskable (len("token")=5, len("piece")=5)
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 911μs -> 301μs (202% faster)
    n_mask = max(int(500*0.2),1)
    # There should be ceil(500/n_mask) maskings
    expected_maskings = (500 + n_mask - 1)//n_mask
    assert len(masked_inputs) == expected_maskings
    for inputs, answers in zip(masked_inputs, all_answers):
        for idx in answers:
            assert answers[idx] == tokens[idx]
            assert inputs[idx] == mask_token

def test_large_scale_no_maskable_tokens():
    # 1000 tokens, none are large enough
    tokens = ["a"]*1000
    min_token_lengths = (2, 2, 2)
    mask_token = "[MASK]"
    p_mask = 0.5
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 182μs -> 153μs (19.2% faster)
    assert masked_inputs == []
    assert all_answers == []

def test_large_scale_all_masked_once():
    # 10 tokens, p_mask=1.0, so all masked in one go
    tokens = [f"token{i}" for i in range(10)]
    min_token_lengths = (1, 1, 1)
    mask_token = "[MASK]"
    p_mask = 1.0
    masked_inputs, all_answers = mask_tokens_randomly(tokens, min_token_lengths, mask_token, p_mask) # 10.9μs -> 12.6μs (13.8% slower)
    assert len(masked_inputs) == 1
    assert masked_inputs[0] == [mask_token] * 10
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, run `git checkout codeflash/optimize-mask_tokens_randomly-mh2kp1kl` and push.

Codeflash

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 22, 2025 22:36
@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Oct 22, 2025