@codeflash-ai codeflash-ai bot commented Oct 2, 2025

📄 37% (0.37x) speedup for `extract_sentrytrace_data` in `sentry_sdk/tracing_utils.py`

⏱️ Runtime : `3.30 milliseconds` → `2.41 milliseconds` (best of `124` runs)

📝 Explanation and details

The optimization adds length checks before expensive string formatting operations. Specifically:

Key Changes:

  • Added `len(trace_id) != 32` check before `"{:032x}".format(int(trace_id, 16))`
  • Added `len(parent_span_id) != 16` check before `"{:016x}".format(int(parent_span_id, 16))`

Why It's Faster:
The original code always performed string-to-int conversion and formatting, even when the trace_id/span_id were already properly formatted. The optimization skips these expensive operations when the strings are already the correct length (32 hex chars for trace_id, 16 for span_id).

The `int(trace_id, 16)` and `"{:032x}".format()` operations are computationally expensive, involving:

  • Hexadecimal string parsing
  • Integer conversion
  • String formatting with zero-padding

Performance Impact:
Test results show the optimization is most effective when trace IDs and span IDs are already properly formatted (which is common in production). Cases like `test_valid_full_header` show a 51.6% speedup, and `test_missing_trace_id` shows a 65.9% speedup. The optimization has minimal overhead for cases where formatting is still needed, with only small gains (1-7%) for malformed inputs.

This is particularly valuable for high-throughput tracing scenarios where most headers contain well-formatted trace data.
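
For illustration, here is a minimal sketch of that length-check-before-format pattern; `_normalize_ids` is a made-up helper name for this sketch, not the SDK's actual `extract_sentrytrace_data` body.

```python
# Minimal sketch of the length check described above (illustrative only).
def _normalize_ids(trace_id, parent_span_id):
    if trace_id is not None and len(trace_id) != 32:
        # Only pay for hex parsing + zero-padded reformatting when needed.
        trace_id = "{:032x}".format(int(trace_id, 16))
    if parent_span_id is not None and len(parent_span_id) != 16:
        parent_span_id = "{:016x}".format(int(parent_span_id, 16))
    return trace_id, parent_span_id

# Already-canonical IDs skip the expensive round-trip (the common production case):
print(_normalize_ids("0123456789abcdef0123456789abcdef", "0123456789abcdef"))
# Short IDs are still zero-padded to canonical length:
print(_normalize_ids("1f", "1f"))
```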

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | ✅ 21 Passed |
| 🌀 Generated Regression Tests | ✅ 1960 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |
⚙️ Existing Unit Tests and Runtime

| Test File::Test Function | Original ⏱️ | Optimized ⏱️ | Speedup |
| --- | --- | --- | --- |
| `tracing/test_http_headers.py::test_sentrytrace_extraction` | 13.8μs | 9.03μs | 52.8% ✅ |
🌀 Generated Regression Tests and Runtime
import re
from typing import Dict, Optional, Union

# imports
import pytest  # used for our unit tests
from sentry_sdk.tracing_utils import extract_sentrytrace_data

SENTRY_TRACE_REGEX = re.compile(
    "^[ \t]*"  # whitespace
    "([0-9a-f]{32})?"  # trace_id
    "-?([0-9a-f]{16})?"  # span_id
    "-?([01])?"  # sampled
    "[ \t]*$"  # whitespace
)
from sentry_sdk.tracing_utils import extract_sentrytrace_data

# unit tests

# ----------------------
# Basic Test Cases
# ----------------------

def test_valid_full_header():
    # Basic valid header: all fields present
    header = "0123456789abcdef0123456789abcdef-0123456789abcdef-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.43μs -> 4.90μs (51.6% faster)

def test_valid_full_header_sampled_false():
    # Basic valid header: sampled = 0
    header = "abcdefabcdefabcdefabcdefabcdefabcd-abcdefabcdefabcd-0"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.88μs -> 3.71μs (4.77% faster)

def test_valid_header_missing_sampled():
    # Valid header: missing sampled field
    header = "0123456789abcdef0123456789abcdef-0123456789abcdef"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.80μs -> 4.12μs (65.2% faster)

def test_valid_header_only_trace_id():
    # Valid header: only trace_id present
    header = "0123456789abcdef0123456789abcdef"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.17μs -> 3.62μs (42.7% faster)

def test_valid_header_with_whitespace():
    # Valid header with leading/trailing whitespace
    header = "  0123456789abcdef0123456789abcdef-0123456789abcdef-1  "
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.74μs -> 4.19μs (60.8% faster)

# ----------------------
# Edge Test Cases
# ----------------------

def test_empty_header():
    # Edge: empty string
    header = ""
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 459ns -> 492ns (6.71% slower)

def test_none_header():
    # Edge: None as input
    header = None
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 473ns -> 444ns (6.53% faster)

def test_invalid_trace_id_length():
    # Edge: trace_id too short
    header = "01234567-0123456789abcdef-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.00μs -> 2.83μs (6.00% faster)

def test_invalid_span_id_length():
    # Edge: span_id too short
    header = "0123456789abcdef0123456789abcdef-01234567-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.21μs -> 4.14μs (1.74% faster)

def test_invalid_sampled_value():
    # Edge: sampled value not 0 or 1
    header = "0123456789abcdef0123456789abcdef-0123456789abcdef-2"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.56μs -> 4.50μs (1.36% faster)

def test_invalid_characters_in_trace_id():
    # Edge: invalid characters in trace_id
    header = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz-0123456789abcdef-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.50μs -> 2.37μs (5.36% faster)

def test_invalid_characters_in_span_id():
    # Edge: invalid characters in span_id
    header = "0123456789abcdef0123456789abcdef-zzzzzzzzzzzzzzzz-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.03μs -> 3.82μs (5.69% faster)

def test_missing_trace_id():
    # Edge: missing trace_id, only span_id and sampled
    header = "-0123456789abcdef-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.44μs -> 3.88μs (65.9% faster)

def test_missing_span_id():
    # Edge: missing span_id, only trace_id and sampled
    header = "0123456789abcdef0123456789abcdef--1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.88μs -> 4.00μs (47.0% faster)

def test_header_with_extra_fields():
    # Edge: extra fields after sampled
    header = "0123456789abcdef0123456789abcdef-0123456789abcdef-1-extra"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.96μs -> 4.80μs (3.16% faster)

def test_header_with_internal_spaces():
    # Edge: spaces inside the trace_id/span_id
    header = "0123456789abcd ef0123456789abcdef-0123456789abcdef-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.74μs -> 2.54μs (7.95% faster)

def test_header_with_leading_and_trailing_dash():
    # Edge: leading and trailing dash
    header = "-0123456789abcdef0123456789abcdef-0123456789abcdef-1-"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.44μs -> 3.26μs (5.51% faster)

def test_header_with_only_sampled():
    # Edge: only sampled value
    header = "-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.50μs -> 3.44μs (1.98% faster)

def test_header_with_only_span_id():
    # Edge: only span_id, no trace_id
    header = "-0123456789abcdef"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.94μs -> 3.64μs (63.3% faster)

def test_header_with_leading_and_trailing_tabs():
    # Edge: tabs instead of spaces
    header = "\t0123456789abcdef0123456789abcdef-0123456789abcdef-1\t"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.15μs -> 4.24μs (68.6% faster)

def test_header_with_tracing_format():
    # Edge: header with "00-" prefix and "-00" suffix
    header = "00-0123456789abcdef0123456789abcdef-0123456789abcdef-1-00"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.12μs -> 4.80μs (48.4% faster)

def test_header_with_tracing_format_and_whitespace():
    # Edge: header with "00-" prefix, "-00" suffix, and whitespace
    header = " 00-0123456789abcdef0123456789abcdef-0123456789abcdef-1-00 "
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.12μs -> 3.04μs (2.97% faster)

def test_header_with_uppercase_hex():
    # Edge: uppercase hex letters in trace_id and span_id
    header = "ABCDEFABCDEFABCDEFABCDEFABCDEFAB-ABCDEFABCDEFABCD-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.43μs -> 2.27μs (7.33% faster)

def test_header_with_mixedcase_hex():
    # Edge: mixed-case hex letters in trace_id and span_id
    header = "aBcDeFaBcDeFaBcDeFaBcDeFaBcDeFaB-aBcDeFaBcDeFaBcD-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.41μs -> 2.34μs (3.21% faster)

def test_header_with_leading_and_trailing_spaces_and_tabs():
    # Edge: both spaces and tabs
    header = " \t0123456789abcdef0123456789abcdef-0123456789abcdef-1\t "
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.61μs -> 4.67μs (62.9% faster)

# ----------------------
# Large Scale Test Cases
# ----------------------

def test_many_valid_headers():
    # Large scale: test many valid headers
    for i in range(100):
        trace_id = f"{i:032x}"
        span_id = f"{i:016x}"
        sampled = str(i % 2)
        header = f"{trace_id}-{span_id}-{sampled}"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 224μs -> 132μs (69.3% faster)

def test_many_invalid_headers():
    # Large scale: test many invalid headers
    for i in range(100):
        # Too short trace_id
        header = f"{i:08x}-0123456789abcdef-1"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 81.9μs -> 75.1μs (9.08% faster)

        # Too short span_id
        header = f"0123456789abcdef0123456789abcdef-{i:08x}-1"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 146μs -> 138μs (6.19% faster)

        # Invalid sampled value
        header = f"0123456789abcdef0123456789abcdef-0123456789abcdef-{i+2}"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output

def test_large_header_with_valid_data():
    # Large scale: test a header with max valid values
    trace_id = "f" * 32
    span_id = "e" * 16
    sampled = "1"
    header = f"{trace_id}-{span_id}-{sampled}"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.04μs -> 4.33μs (62.6% faster)

def test_large_header_with_tracing_format():
    # Large scale: header with tracing format and large values
    trace_id = "a" * 32
    span_id = "b" * 16
    sampled = "0"
    header = f"00-{trace_id}-{span_id}-{sampled}-00"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.91μs -> 4.47μs (54.6% faster)

def test_bulk_headers_performance():
    # Large scale: test performance with 500 valid and 500 invalid headers
    valid_headers = [
        f"{i:032x}-{i:016x}-{i%2}" for i in range(500)
    ]
    invalid_headers = [
        f"{i:08x}-badspanid-{i%2}" for i in range(500)
    ]
    # Check all valid headers parse correctly
    for i, header in enumerate(valid_headers):
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 1.08ms -> 651μs (65.3% faster)
    # Check all invalid headers return None
    for header in invalid_headers:
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 398μs -> 368μs (8.23% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import re

# imports
import pytest  # used for our unit tests
from sentry_sdk.tracing_utils import extract_sentrytrace_data

SENTRY_TRACE_REGEX = re.compile(
    "^[ \t]*"  # whitespace
    "([0-9a-f]{32})?"  # trace_id
    "-?([0-9a-f]{16})?"  # span_id
    "-?([01])?"  # sampled
    "[ \t]*$"  # whitespace
)
from sentry_sdk.tracing_utils import extract_sentrytrace_data

# unit tests

# 1. Basic Test Cases

def test_basic_valid_full_header():
    # Test a header with all fields present and valid
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.56μs -> 4.35μs (50.8% faster)

def test_basic_valid_unsampled():
    # Test a header with sampled bit set to 0
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-0"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.58μs -> 4.20μs (56.8% faster)

def test_basic_valid_no_sampled():
    # Test a header with no sampled bit
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.89μs -> 3.96μs (48.7% faster)

def test_basic_valid_only_trace_id():
    # Test a header with only trace_id
    header = "4bf92f3577b34da6a3ce929d0e0e4736"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.14μs -> 3.48μs (47.8% faster)

def test_basic_valid_trace_and_sampled():
    # Test a header with trace_id and sampled bit
    header = "4bf92f3577b34da6a3ce929d0e0e4736--1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.47μs -> 3.81μs (43.7% faster)

def test_basic_valid_span_and_sampled():
    # Test a header with span_id and sampled bit, but no trace_id
    header = "-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.28μs -> 3.75μs (41.0% faster)

def test_basic_valid_only_span_id():
    # Test a header with only span_id
    header = "-00f067aa0ba902b7"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 5.24μs -> 3.53μs (48.6% faster)

def test_basic_valid_only_sampled():
    # Test a header with only sampled bit
    header = "--1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.15μs -> 3.19μs (1.25% slower)

def test_basic_valid_only_sampled_zero():
    # Test a header with only sampled bit set to zero
    header = "--0"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.17μs -> 2.94μs (7.93% faster)

def test_basic_valid_whitespace():
    # Test a header with leading and trailing whitespace
    header = "   4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-1   "
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.64μs -> 4.36μs (52.4% faster)

# 2. Edge Test Cases

def test_edge_empty_string():
    # Test empty string input
    header = ""
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 410ns -> 443ns (7.45% slower)

def test_edge_none_input():
    # Test None input
    header = None
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 466ns -> 449ns (3.79% faster)

def test_edge_invalid_characters():
    # Test header with invalid characters
    header = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.65μs -> 2.59μs (2.32% faster)

def test_edge_too_short_trace_id():
    # Test header with too short trace_id
    header = "4bf92f3577b34da6a3ce929d0e0e47-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.28μs -> 3.18μs (3.08% faster)

def test_edge_too_long_trace_id():
    # Test header with too long trace_id
    header = "4bf92f3577b34da6a3ce929d0e0e47361234-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.80μs -> 3.36μs (12.9% faster)

def test_edge_too_short_span_id():
    # Test header with too short span_id
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.25μs -> 3.81μs (11.4% faster)

def test_edge_too_long_span_id():
    # Test header with too long span_id
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b71234-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.75μs -> 4.50μs (5.40% faster)

def test_edge_invalid_sampled_bit():
    # Test header with invalid sampled bit
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-2"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.57μs -> 4.29μs (6.53% faster)

def test_edge_extra_fields():
    # Test header with extra fields
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-1-extra"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.68μs -> 4.17μs (12.1% faster)

def test_edge_only_dashes():
    # Test header with only dashes
    header = "---"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.81μs -> 2.59μs (8.34% faster)

def test_edge_dash_at_start_and_end():
    # Test header with dash at start and end, but valid in the middle
    header = "-4bf92f3577b34da6a3ce929d0e0e4736-"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.24μs -> 3.17μs (2.05% faster)

def test_edge_trace_id_uppercase():
    # Test header with uppercase hex letters (should be valid)
    header = "4BF92F3577B34DA6A3CE929D0E0E4736-00F067AA0BA902B7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.32μs -> 2.19μs (6.12% faster)

def test_edge_trace_id_leading_zeros():
    # Test header with trace_id having leading zeros
    header = "00000000000000000000000000000001-0000000000000001-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.51μs -> 4.51μs (66.5% faster)

def test_edge_trace_id_all_zeros():
    # Test header with trace_id all zeros
    header = "00000000000000000000000000000000-0000000000000000-0"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.10μs -> 4.02μs (51.7% faster)

def test_edge_trace_id_and_span_id_only():
    # Test header with trace_id and span_id, but sampled missing
    header = "4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.37μs -> 3.96μs (60.9% faster)

def test_edge_w3c_format():
    # Test header in W3C format with 00-...-00
    header = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.86μs -> 4.41μs (55.6% faster)

def test_edge_w3c_format_sampled():
    # Test header in W3C format with sampled bit set to 1
    header = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.83μs -> 2.79μs (1.65% faster)

def test_edge_leading_and_trailing_whitespace():
    # Test header with tabs and spaces
    header = "\t 4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-1 \t"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 7.16μs -> 4.45μs (60.8% faster)

def test_edge_invalid_dash_positions():
    # Test header with dashes in wrong positions
    header = "4bf92f3577b34da6a3ce929d0e0e4736--00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.27μs -> 4.16μs (2.50% faster)

def test_edge_numeric_input():
    # Test header with numeric input instead of hex
    header = "12345678901234567890123456789012-1234567890123456-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 6.87μs -> 4.18μs (64.3% faster)

def test_edge_non_hex_span_id():
    # Test header with non-hex span_id
    header = "4bf92f3577b34da6a3ce929d0e0e4736-zzzzzzzzzzzzzzzz-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 4.03μs -> 3.84μs (4.84% faster)

def test_edge_non_hex_trace_id():
    # Test header with non-hex trace_id
    header = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz-00f067aa0ba902b7-1"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 2.39μs -> 2.21μs (8.06% faster)

def test_edge_none_fields():
    # Test header with only dashes and no fields
    header = "--"
    codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 3.01μs -> 3.09μs (2.56% slower)

# 3. Large Scale Test Cases

def test_large_scale_valid_headers():
    # Test a large number of valid headers
    for i in range(100):
        trace_id = f"{i:032x}"
        span_id = f"{i:016x}"
        sampled = str(i % 2)
        header = f"{trace_id}-{span_id}-{sampled}"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 221μs -> 132μs (67.1% faster)

def test_large_scale_invalid_headers():
    # Test a large number of invalid headers (wrong length)
    for i in range(100):
        trace_id = f"{i:030x}"  # 30 chars instead of 32
        span_id = f"{i:014x}"   # 14 chars instead of 16
        sampled = "2"           # invalid sampled
        header = f"{trace_id}-{span_id}-{sampled}"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 109μs -> 98.9μs (10.6% faster)

def test_large_scale_mixed_headers():
    # Test a mix of valid and invalid headers
    for i in range(100):
        if i % 2 == 0:
            trace_id = f"{i:032x}"
            span_id = f"{i:016x}"
            sampled = str(i % 2)
            header = f"{trace_id}-{span_id}-{sampled}"
            codeflash_output = extract_sentrytrace_data(header); result = codeflash_output
        else:
            trace_id = f"{i:030x}"  # invalid length
            span_id = f"{i:014x}"   # invalid length
            sampled = "2"           # invalid sampled
            header = f"{trace_id}-{span_id}-{sampled}"
            codeflash_output = extract_sentrytrace_data(header); result = codeflash_output

def test_large_scale_whitespace_headers():
    # Test headers with lots of whitespace
    for i in range(100):
        trace_id = f"{i:032x}"
        span_id = f"{i:016x}"
        sampled = str(i % 2)
        header = f"   {trace_id}-{span_id}-{sampled}   "
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 223μs -> 136μs (64.3% faster)

def test_large_scale_w3c_format_headers():
    # Test W3C format headers with 00-...-00
    for i in range(100):
        trace_id = f"{i:032x}"
        span_id = f"{i:016x}"
        sampled = str(i % 2)
        header = f"00-{trace_id}-{span_id}-0{sampled}-00"
        codeflash_output = extract_sentrytrace_data(header); result = codeflash_output # 195μs -> 181μs (7.62% faster)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, `git checkout codeflash/optimize-extract_sentrytrace_data-mg9m9ul7` and push.

Codeflash

constantinius and others added 3 commits October 2, 2025 14:35
getsentry#4875)

### Description

We cannot directly intercept MCP Tool calls, as they are done remotely
by the LLM and not in the Agent itself. However, we see when such a tool
call took place, so we can emit a zero-length span with the tool call
specifics. It will start at the same time as the parent span.

Closes
https://linear.app/getsentry/issue/TET-1192/openai-agents-hosted-mcp-calls-cannot-be-wrapped-in-an-execute-tool

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Emit execute_tool spans for MCP tool calls detected in agent results,
with tool metadata, input/output (PII-gated), and error status.
> 
> - **Tracing/Spans (openai_agents)**:
> - Add `utils._create_mcp_execute_tool_spans` to emit
`OP.GEN_AI_EXECUTE_TOOL` spans for MCP tool calls (`McpCall`) found in
`result.output`.
> - Sets `GEN_AI_TOOL_TYPE=mcp`, `GEN_AI_TOOL_NAME`, propagates
input/output when PII allowed, and marks `SPANSTATUS.ERROR` on error.
> - Spans start at the parent span's start time (zero-length
representation of remote call).
> - Wire into `spans/ai_client.update_ai_client_span` to create these
tool spans after setting usage/input/output data.
>   - Update imports to include `SPANSTATUS` and `OP`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
96df8c1. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Ivana Kellyer <[email protected]>
Update our test matrix with new releases of integrated frameworks and
libraries.

## How it works
- Scan PyPI for all supported releases of all frameworks we have a
dedicated test suite for.
- Pick a representative sample of releases to run our test suite
against. We always test the latest and oldest supported version.
- Update
[tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini)
with the new releases.

## Action required
- If CI passes on this PR, it's safe to approve and merge. It means our
integrations can handle new versions of frameworks that got pulled in.
- If CI doesn't pass on this PR, this points to an incompatibility of
either our integration or our test setup with a new version of a
framework.
- Check what the failures look like and either fix them, or update the
[test
config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py)
and rerun
[scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh).
See
[scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md)
for what configuration options are available.

 _____________________

_🤖 This PR was automatically created using [a GitHub
action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 2, 2025 16:15
@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Oct 2, 2025
sentrivana and others added 24 commits October 3, 2025 08:40
### Description
Even though we try to figure out the current release automatically if
it's not provided, it can still end up being `None`. If that's the case,
it won't be attached to logs. The `test_logs_attributes` test assumes
there always is a release, which is incorrect.

I opted for conditionally checking for `sentry.release` in the test
instead of removing the check altogether, even though the test itself is
supposed to test custom user provided attributes. The reason is that
there is no other generic logs test testing `sentry.release`.

#### Issues
Closes getsentry#4878

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
Adds tracing support to DramatiqIntegration getsentry#3454

---------

Co-authored-by: igorek <[email protected]>
Co-authored-by: Anton Pirker <[email protected]>
Co-authored-by: Ivana Kellyer <[email protected]>
Add a first implementation of the litellm integration, supporting
completion and embeddings

Closes
https://linear.app/getsentry/issue/PY-1828/add-agent-monitoring-support-for-litellm
Closes https://linear.app/getsentry/issue/TET-1218/litellm-testing

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduce `LiteLLMIntegration` that instruments LiteLLM
chat/embeddings calls with spans, token usage, optional prompt logging,
and exception capture.
> 
> - **Integrations**:
> - Add `sentry_sdk/integrations/litellm.py` with `LiteLLMIntegration`
registering LiteLLM `input/success/failure` callbacks.
> - Start spans for `chat`/`embeddings`, set `gen_ai.*` metadata
(provider/system, operation, model, params like `max_tokens`,
`temperature`, `top_p`, `stream`).
> - Record LiteLLM-specific fields: `api_base`, `api_version`,
`custom_llm_provider`.
> - Optionally capture request/response messages when `include_prompts`
and PII are enabled.
> - Track token usage from response `usage` and capture exceptions;
always finish spans.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1ecd559. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Ivana Kellyer <[email protected]>
### Description
huggingface_hub has a release candidate out and our test suite doesn't
work with it.

Two changes necessary:
- 1.0 uses `httpx`, so our `responses` mocks don't work; we also need
`pytest_httpx`.
- With httpx we get additional `http.client` spans in the transaction,
while before we were assuming the transaction only contains exactly one
`gen_ai.*` span and nothing else.

#### Issues
Closes getsentry#4802

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
Update our test matrix with new releases of integrated frameworks and
libraries.

## How it works
- Scan PyPI for all supported releases of all frameworks we have a
dedicated test suite for.
- Pick a representative sample of releases to run our test suite
against. We always test the latest and oldest supported version.
- Update
[tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini)
with the new releases.

## Action required
- If CI passes on this PR, it's safe to approve and merge. It means our
integrations can handle new versions of frameworks that got pulled in.
- If CI doesn't pass on this PR, this points to an incompatibility of
either our integration or our test setup with a new version of a
framework.
- Check what the failures look like and either fix them, or update the
[test
config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py)
and rerun
[scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh).
See
[scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md)
for what configuration options are available.

 _____________________

_🤖 This PR was automatically created using [a GitHub
action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ivana Kellyer <[email protected]>
Update our test matrix with new releases of integrated frameworks and
libraries.

## How it works
- Scan PyPI for all supported releases of all frameworks we have a
dedicated test suite for.
- Pick a representative sample of releases to run our test suite
against. We always test the latest and oldest supported version.
- Update
[tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini)
with the new releases.

## Action required
- If CI passes on this PR, it's safe to approve and merge. It means our
integrations can handle new versions of frameworks that got pulled in.
- If CI doesn't pass on this PR, this points to an incompatibility of
either our integration or our test setup with a new version of a
framework.
- Check what the failures look like and either fix them, or update the
[test
config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py)
and rerun
[scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh).
See
[scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md)
for what configuration options are available.

 _____________________

_🤖 This PR was automatically created using [a GitHub
action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ivana Kellyer <[email protected]>
…ry#4883)

Prevent mutating cookies on incoming HTTP requests if the cookie name is
in the scrubbers denylist.

Cookies like `token=...` were replaced with `AnnotatedValue` because a
shallow reference of the request information was held by the client. A
deep copy is introduced so scrubbing does not interfere with Litestar,
and in particular does not break `JWTCookieAuth`.

Closes getsentry#4882
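
The gist of the fix, as a rough sketch with made-up names (not the actual scrubber code):

```python
import copy

DENYLIST = {"token", "session"}  # illustrative denylist, not the SDK's real one

def scrub_request_cookies(request_info):
    # Deep-copy first so the framework's own cookie dict, which may still be
    # used by auth machinery such as JWTCookieAuth, is never mutated.
    scrubbed = copy.deepcopy(request_info)
    for name in scrubbed.get("cookies", {}):
        if name.lower() in DENYLIST:
            scrubbed["cookies"][name] = "[Filtered]"
    return scrubbed

original = {"cookies": {"token": "secret", "theme": "dark"}}
event_request = scrub_request_cookies(original)
print(event_request["cookies"])  # {'token': '[Filtered]', 'theme': 'dark'}
print(original["cookies"])       # untouched: {'token': 'secret', 'theme': 'dark'}
```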

---------

Co-authored-by: Ivana Kellyer <[email protected]>
### Description

Removing the Check CI Config step altogether as well as associated parts
of the toxgen script (`fail_on_changes`). Added a BIG ALL CAPS WARNING
to `tox.ini` instead. Also updated the toxgen readme a bit.

Removing the check should be fine because we haven't actually seen cases
of people trying to edit `tox.ini` directly -- if this happens in the
future it's easy to notice in the PR. If we don't notice it then, we can
notice it during the weekly toxgen update. And if we don't notice it then,
the file simply gets overwritten. 🤷🏻‍♀️


### The Problem With Checking `tox.ini`: The Long Read

In order to check manual changes to `tox.ini` on a PR, we hash the
committed file, then run toxgen, hash the result, and compare. If the
hashes differ, we fail the check. This works fine as long as there have
been no new releases between the two points in time when `tox.ini` was
last committed and when we ran the check.

This is usually not the case. There are new releases all the time. When
we then rerun toxgen, the resulting `tox.ini` is different from the
committed one because it contains the new releases. So the hashes are
different without any manual changes to the file.

One solution to this is always saving the timestamp of the last time
`tox.ini` was generated, and then when rerunning toxgen for the purposes
of the check, ignoring all new releases past the timestamp. This means
any changes we detect were actually made by the user.

However, the explicit timestamp is prone to merge conflicts. Anytime
`master` has had a toxgen update, and a PR is made that also ran toxgen,
the PR will have a merge conflict on the timestamp field that needs to
be sorted out manually. This is annoying and unnecessary.

(An attempt was made to use an implicit timestamp instead in the form of
the commit timestamp, but this doesn't work since we squash commits on
master, so the timestamp of the last commit that touched `tox.ini` is
actually much later than the change was made. There are also other
problems, like someone running toxgen but committing the change much
later, etc.)

### Solutions considered
- using a custom merge driver to resolve the timestamp conflict
automatically (doesn't work on GH PRs)
- running toxgen in CI on each PR and committing the change (would work
but we're essentially already doing this with the cron job every week)
- not checking in `tox.ini` at all, but running toxgen on each PR
(introduces new package releases unrelated to the PR, no test setup
committed -- contributors and package index maintainers also need to run
our tests)
- finding a different commit to use as the implicit timestamp (doesn't
work because we squash commits on `master`)
- ...

In the end I decided to just get rid of the check. If people modifying
`tox.ini` manually becomes a problem, we can deal with it then. I've
added a big warning to `tox.ini` to discourage this.



#### Issues
Closes getsentry#4886

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
- Add a constant that contains the allowed message roles according to
OTEL and a mapping
- Apply that mapping to all gen_ai integrations
- We will track input roles that do not conform to expectations via a
Sentry issue in agent monitoring to make sure we continually update the
mappings
---------

Co-authored-by: Ivana Kellyer <[email protected]>
…y#4770)

Automatically fork isolation and current scopes when running tasks with
`concurrent.future`. Packages the implementation from
getsentry#4508 (comment)
as an integration.

Closes getsentry#4565

---------

Co-authored-by: Anton Pirker <[email protected]>
### Description
Python 3.14 is out, let's use it for linting.

#### Issues
Ref getsentry#4895

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
### Description
Remove old metrics code to make way for
getsentry#4898

Metrics was always an experimental feature and Sentry stopped accepting
metrics a year ago.

#### Issues
<!--
* resolves: getsentry#1234
* resolves: LIN-1234
-->

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
### Description
Logs are not experimental anymore, but one of the internal log-related
functions still had "experimental" in the name.

#### Issues
<!--
* resolves: getsentry#1234
* resolves: LIN-1234
-->

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
…y#4898)

### Summary
Similar to getsentry/sentry-javascript#17883,
this allows the py sdk to send in new trace metric protocol items,
although this code is experimental since the schema may still change.
Most of this code has been copied from logs (e.g. log batcher -> metrics
batcher) in order to dogfood; once we're more sure of our approach, we
can refactor.

Closes LOGS-367

---------

Co-authored-by: Ivana Kellyer <[email protected]>
Adds support for `python-genai` integrations. It supports both sync and
async clients, and both regular and streaming modes for interacting with
models and building agents.

Closes [PY-1733: Add agent monitoring support for
`google-genai`](https://linear.app/getsentry/issue/PY-1733/add-agent-monitoring-support-for-google-genai)
…etsentry#4858)

### Description

Without "@functools.wraps" added, Ray exposes Prometheus metrics with
all tasks named "new_func"
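
A minimal sketch of why the decorator matters (hypothetical wrapper, not the integration's actual code):

```python
import functools

def traced(f):
    @functools.wraps(f)  # preserves f.__name__ and other metadata on the wrapper
    def new_func(*args, **kwargs):
        # ...start a Sentry span here (omitted in this sketch)...
        return f(*args, **kwargs)
    return new_func

@traced
def process_item(x):
    return x * 2

# Without functools.wraps, Ray/Prometheus would see "new_func" here:
print(process_item.__name__)  # "process_item"
```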

#### Issues

* Follow up to
[!4430](getsentry#4430) comments

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
### Description
Updating tox + reorganizing the AI group alphabetically.

The new openai release doesn't work on 3.8, so we explicitly test on 3.9+
from there on.

Doing this now to unblock
getsentry#4906 (comment)

#### Issues
<!--
* resolves: getsentry#1234
* resolves: LIN-1234
-->

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
alexander-alderman-webb and others added 13 commits October 10, 2025 13:49
…ry#4902)

Add code source attributes to outgoing HTTP requests as described in
getsentry/sentry-docs#15161. The attributes are only added if the time to receive a response to an HTTP request exceeds a configurable threshold value.

Factors out functionality from SQL query source and tests that it works
in the HTTP request setting.

Closes getsentry#4881
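
The general shape of that threshold gating, sketched with hypothetical names (the real option and attribute names are defined in the linked docs PR):

```python
import time

SLOW_REQUEST_THRESHOLD = 0.1  # hypothetical threshold, in seconds

def send_with_code_source(span, send_request, caller_frame):
    start = time.monotonic()
    response = send_request()
    elapsed = time.monotonic() - start
    # Only attach code-origin attributes when the request is slow enough to be
    # worth the extra stack-inspection cost.
    if elapsed >= SLOW_REQUEST_THRESHOLD:
        span.set_data("code.filepath", caller_frame["filepath"])
        span.set_data("code.function", caller_frame["function"])
        span.set_data("code.lineno", caller_frame["lineno"])
    return response
```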
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.30.8</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.8 - 10 Oct 2025</h2>
<p>No user facing changes.</p>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.8/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.7</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.7 - 06 Oct 2025</h2>
<p>No user facing changes.</p>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.7/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.6</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.6 - 02 Oct 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.2. <a
href="https://redirect.github.com/github/codeql-action/pull/3168">#3168</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.6/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.5</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.5 - 26 Sep 2025</h2>
<ul>
<li>We fixed a bug that was introduced in <code>3.30.4</code> with
<code>upload-sarif</code> which resulted in files without a
<code>.sarif</code> extension not getting uploaded. <a
href="https://redirect.github.com/github/codeql-action/pull/3160">#3160</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.30.5/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.30.4</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.30.4 - 25 Sep 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h2>3.29.4 - 23 Jul 2025</h2>
<p>No user facing changes.</p>
<h2>3.29.3 - 21 Jul 2025</h2>
<p>No user facing changes.</p>
<h2>3.29.2 - 30 Jun 2025</h2>
<ul>
<li>Experimental: When the <code>quality-queries</code> input for the
<code>init</code> action is provided with an argument, separate
<code>.quality.sarif</code> files are produced and uploaded for each
language with the results of the specified queries. Do not use this in
production as it is part of an internal experiment and subject to change
at any time. <a
href="https://redirect.github.com/github/codeql-action/pull/2935">#2935</a></li>
</ul>
<h2>3.29.1 - 27 Jun 2025</h2>
<ul>
<li>Fix bug in PR analysis where user-provided <code>include</code>
query filter fails to exclude non-included queries. <a
href="https://redirect.github.com/github/codeql-action/pull/2938">#2938</a></li>
<li>Update default CodeQL bundle version to 2.22.1. <a
href="https://redirect.github.com/github/codeql-action/pull/2950">#2950</a></li>
</ul>
<h2>3.29.0 - 11 Jun 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.22.0. <a
href="https://redirect.github.com/github/codeql-action/pull/2925">#2925</a></li>
<li>Bump minimum CodeQL bundle version to 2.16.6. <a
href="https://redirect.github.com/github/codeql-action/pull/2912">#2912</a></li>
</ul>
<h2>3.28.21 - 28 July 2025</h2>
<p>No user facing changes.</p>
<h2>3.28.20 - 21 July 2025</h2>
<ul>
<li>Remove support for combining SARIF files from a single upload for
GHES 3.18, see <a
href="https://github.blog/changelog/2024-05-06-code-scanning-will-stop-combining-runs-from-a-single-upload/">the
changelog post</a>. <a
href="https://redirect.github.com/github/codeql-action/pull/2959">#2959</a></li>
</ul>
<h2>3.28.19 - 03 Jun 2025</h2>
<ul>
<li>The CodeQL Action no longer includes its own copy of the extractor
for the <code>actions</code> language, which is currently in public
preview.
The <code>actions</code> extractor has been included in the CodeQL CLI
since v2.20.6. If your workflow has enabled the <code>actions</code>
language <em>and</em> you have pinned
your <code>tools:</code> property to a specific version of the CodeQL
CLI earlier than v2.20.6, you will need to update to at least CodeQL
v2.20.6 or disable
<code>actions</code> analysis.</li>
<li>Update default CodeQL bundle version to 2.21.4. <a
href="https://redirect.github.com/github/codeql-action/pull/2910">#2910</a></li>
</ul>
<h2>3.28.18 - 16 May 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.21.3. <a
href="https://redirect.github.com/github/codeql-action/pull/2893">#2893</a></li>
<li>Skip validating SARIF produced by CodeQL for improved performance.
<a
href="https://redirect.github.com/github/codeql-action/pull/2894">#2894</a></li>
<li>The number of threads and amount of RAM used by CodeQL can now be
set via the <code>CODEQL_THREADS</code> and <code>CODEQL_RAM</code>
runner environment variables. If set, these environment variables
override the <code>threads</code> and <code>ram</code> inputs
respectively. <a
href="https://redirect.github.com/github/codeql-action/pull/2891">#2891</a></li>
</ul>
<h2>3.28.17 - 02 May 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.21.2. <a
href="https://redirect.github.com/github/codeql-action/pull/2872">#2872</a></li>
</ul>
<h2>3.28.16 - 23 Apr 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/github/codeql-action/commit/a841c540b73bac7685691a2f930006ba52db3645"><code>a841c54</code></a>
Scratch <code>uploadSpecifiedFiles</code> tests, make
<code>uploadPayload</code> tests instead</li>
<li><a
href="https://github.com/github/codeql-action/commit/aeb12f6eaaa7419b7170f27dc3e2b5710203ff2d"><code>aeb12f6</code></a>
Merge branch 'main' into redsun82/skip-sarif-upload-tests</li>
<li><a
href="https://github.com/github/codeql-action/commit/6fd4ceb7bbb8ec2746fd4d3a64b77787dffd9afc"><code>6fd4ceb</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3189">#3189</a>
from github/henrymercer/download-codeql-rate-limit</li>
<li><a
href="https://github.com/github/codeql-action/commit/196a3e577b477ffb129cb35c7ed3ba72e6e2dbe7"><code>196a3e5</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3188">#3188</a>
from github/mbg/telemetry/partial-config</li>
<li><a
href="https://github.com/github/codeql-action/commit/98abb870dcd6421594724ae220643e13baf90298"><code>98abb87</code></a>
Add configuration error for rate limited CodeQL download</li>
<li><a
href="https://github.com/github/codeql-action/commit/bdd2cdf891a0a89c6680bd54c9ba63c80e440f75"><code>bdd2cdf</code></a>
Also include <code>language</code> in error status report for
<code>start-proxy</code>, if available</li>
<li><a
href="https://github.com/github/codeql-action/commit/fb148789ab863424b005147b4b018fe5691e5ccc"><code>fb14878</code></a>
Include <code>languages</code> in <code>start-proxy</code>
telemetry</li>
<li><a
href="https://github.com/github/codeql-action/commit/2ff418f28a66dd71cd80701e95ec26db12875f15"><code>2ff418f</code></a>
Parse <code>language</code> before calling
<code>getCredentials</code></li>
<li>See full diff in <a
href="https://github.com/github/codeql-action/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Update our test matrix with new releases of integrated frameworks and
libraries.

## How it works
- Scan PyPI for all supported releases of all frameworks we have a
dedicated test suite for.
- Pick a representative sample of releases to run our test suite
against. We always test the latest and oldest supported version.
- Update
[tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini)
with the new releases.

## Action required
- If CI passes on this PR, it's safe to approve and merge. It means our
integrations can handle new versions of frameworks that got pulled in.
- If CI doesn't pass on this PR, this points to an incompatibility of
either our integration or our test setup with a new version of a
framework.
- Check what the failures look like and either fix them, or update the
[test
config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py)
and rerun
[scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh).
See
[scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md)
for what configuration options are available.

 _____________________

_🤖 This PR was automatically created using [a GitHub
action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ivana Kellyer <[email protected]>
Hi all, I am building Codeflash.ai, which is an automated performance
optimizer for Python codebases. I tried optimizing sentry and found a
bunch of great optimizations that I would like to contribute. Would love
to collaborate with your team to get them reviewed and merged. Let me
know what's the best way to get in touch.

<!-- CODEFLASH_OPTIMIZATION:
{"function":"_get_db_span_description","file":"sentry_sdk/integrations/redis/modules/queries.py","speedup_pct":"44%","speedup_x":"0.44x","original_runtime":"586
microseconds","best_runtime":"408
microseconds","optimization_type":"loop","timestamp":"2025-10-02T20:47:52.016Z","version":"1.0"}
-->
#### 📄 44% (0.44x) speedup for ***`_get_db_span_description` in
`sentry_sdk/integrations/redis/modules/queries.py`***

⏱️ Runtime : **`586 microseconds`** **→** **`408 microseconds`** (best
of `269` runs)

#### 📝 Explanation and details


The optimization achieves a **43% speedup** by eliminating redundant
function calls inside the loop in `_get_safe_command()`.

**Key optimizations applied:**

1. **Cached `should_send_default_pii()` call**: The original code called
this function inside the loop for every non-key argument (up to 146
times in profiling). The optimized version calls it once before the loop
and stores the result in `send_default_pii`, reducing expensive function
calls from O(n) to O(1).

2. **Pre-computed `name.lower()`**: The original code computed
`name.lower()` inside the loop for every argument (204 times in
profiling). The optimized version computes it once before the loop and
reuses the `name_low` variable.

**Performance impact from profiling:**
- The `should_send_default_pii()` calls dropped from 1.40ms (65.2% of
total time) to 625μs (45.9% of total time)
- The `name.lower()` calls were eliminated from the loop entirely,
removing 99ms of redundant computation
- Overall `_get_safe_command` execution time improved from 2.14ms to
1.36ms (36% faster)

**Test case patterns where this optimization excels:**
- **Multiple arguments**: Commands with many arguments see dramatic
improvements (up to 262% faster for large arg lists)
- **Large-scale operations**: Tests with 1000+ arguments show 171-223%
speedups
- **Frequent Redis commands**: Any command processing multiple values
benefits significantly

The optimization is most effective when processing Redis commands with
multiple arguments, which is common in batch operations and complex data
manipulations.
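
Stripped of the Redis specifics, both changes are plain loop-invariant hoisting. The sketch below is not the actual `_get_safe_command` implementation, just a stand-in showing the before/after loop shape; `should_send_default_pii` here is a dummy assumed to be comparatively expensive.

```python
SENSITIVE_DATA_SUBSTITUTE = "[Filtered]"


def should_send_default_pii() -> bool:
    """Stand-in for the real Sentry helper; treat it as an expensive call."""
    return False


def _get_safe_command_before(name: str, args: tuple) -> str:
    """Original shape: the helper and name.lower() run once per argument."""
    parts = [name]
    for i, arg in enumerate(args):
        if i == 0 or (should_send_default_pii() and name.lower() not in ("auth",)):
            parts.append(repr(arg))
        else:
            parts.append(SENSITIVE_DATA_SUBSTITUTE)
    return " ".join(parts)


def _get_safe_command_after(name: str, args: tuple) -> str:
    """Optimized shape: both loop invariants are computed once, up front."""
    send_default_pii = should_send_default_pii()
    name_low = name.lower()
    parts = [name]
    for i, arg in enumerate(args):
        if i == 0 or (send_default_pii and name_low not in ("auth",)):
            parts.append(repr(arg))
        else:
            parts.append(SENSITIVE_DATA_SUBSTITUTE)
    return " ".join(parts)


# Both shapes produce identical output; only the per-iteration cost differs.
assert _get_safe_command_before("GET", ("k", "v")) == _get_safe_command_after("GET", ("k", "v"))
```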



✅ **Correctness verification report:**

| Test                          | Status            |
| ----------------------------- | ----------------- |
| ⚙️ Existing Unit Tests        | 🔘 **None Found** |
| 🌀 Generated Regression Tests | ✅ **48 Passed**  |
| ⏪ Replay Tests               | 🔘 **None Found** |
| 🔎 Concolic Coverage Tests    | 🔘 **None Found** |
| 📊 Tests Coverage             | 100.0%            |
<details>
<summary>🌀 Generated Regression Tests and Runtime</summary>

```python
import pytest
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description

_MAX_NUM_ARGS = 10

# Dummy RedisIntegration class for testing
class RedisIntegration:
    def __init__(self, max_data_size=None):
        self.max_data_size = max_data_size

# Dummy should_send_default_pii function for testing
_send_pii = False
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description

# --- Basic Test Cases ---

def test_basic_no_args():
    """Test command with no arguments."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); desc = codeflash_output # 2.55μs -> 7.76μs (67.2% slower)

def test_basic_single_arg_pii_false():
    """Test command with one argument, PII off."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); desc = codeflash_output # 3.62μs -> 7.86μs (54.0% slower)

def test_basic_single_arg_pii_true():
    """Test command with one argument, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); desc = codeflash_output # 3.28μs -> 7.40μs (55.7% slower)

def test_basic_multiple_args_pii_false():
    """Test command with multiple args, PII off."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey", "value1", "value2")); desc = codeflash_output # 12.6μs -> 8.24μs (52.8% faster)

def test_basic_multiple_args_pii_true():
    """Test command with multiple args, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey", "value1", "value2")); desc = codeflash_output # 9.92μs -> 8.47μs (17.0% faster)

def test_basic_sensitive_command():
    """Test sensitive command: should always filter after command name."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "secret")); desc = codeflash_output # 7.96μs -> 7.56μs (5.33% faster)

def test_basic_sensitive_command_case_insensitive():
    """Test sensitive command with different casing."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "set", ("mykey", "secret")); desc = codeflash_output # 7.77μs -> 7.84μs (0.881% slower)

def test_basic_max_num_args():
    """Test that args beyond _MAX_NUM_ARGS are ignored."""
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(_MAX_NUM_ARGS + 2))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.0μs -> 9.43μs (197% faster)
    # Only up to _MAX_NUM_ARGS+1 args are processed (the first arg is key)
    expected = "GET 'arg0'" + " [Filtered]" * _MAX_NUM_ARGS

# --- Edge Test Cases ---

def test_edge_empty_command_name():
    """Test with empty command name."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "", ("key",)); desc = codeflash_output # 3.22μs -> 7.46μs (56.9% slower)

def test_edge_empty_args():
    """Test with empty args tuple."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "DEL", ()); desc = codeflash_output # 2.09μs -> 6.73μs (69.0% slower)

def test_edge_none_arg():
    """Test with None argument."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (None,)); desc = codeflash_output # 3.37μs -> 7.57μs (55.5% slower)

def test_edge_mixed_types_args():
    """Test with mixed argument types."""
    integration = RedisIntegration()
    args = ("key", 123, 45.6, True, None, ["a", "b"], {"x": 1})
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 19.9μs -> 8.46μs (136% faster)

def test_edge_sensitive_command_with_pii_true():
    """Sensitive commands should always filter, even if PII is on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AUTH", ("user", "pass")); desc = codeflash_output # 3.40μs -> 7.50μs (54.7% slower)

def test_edge_max_data_size_truncation():
    """Test truncation when description exceeds max_data_size."""
    integration = RedisIntegration(max_data_size=15)
    codeflash_output = _get_db_span_description(integration, "GET", ("verylongkeyname", "value")); desc = codeflash_output # 9.20μs -> 8.72μs (5.57% faster)
    # "GET 'verylongkeyname' [Filtered]" is longer than 15
    # Truncate to 15-len("...") = 12, then add "..."
    expected = "GET 'verylo..."

def test_edge_max_data_size_exact_length():
    """Test truncation when description is exactly max_data_size."""
    integration = RedisIntegration(max_data_size=23)
    codeflash_output = _get_db_span_description(integration, "GET", ("shortkey",)); desc = codeflash_output # 3.33μs -> 7.63μs (56.4% slower)

def test_edge_max_data_size_less_than_ellipsis():
    """Test when max_data_size is less than length of ellipsis."""
    integration = RedisIntegration(max_data_size=2)
    codeflash_output = _get_db_span_description(integration, "GET", ("key",)); desc = codeflash_output # 4.07μs -> 8.65μs (52.9% slower)

def test_edge_args_are_empty_strings():
    """Test when args are empty strings."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("", "")); desc = codeflash_output # 8.52μs -> 7.74μs (10.1% faster)

def test_edge_command_name_is_space():
    """Test when command name is a space."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, " ", ("key",)); desc = codeflash_output # 3.09μs -> 7.34μs (57.9% slower)

# --- Large Scale Test Cases ---

def test_large_many_args_pii_false():
    """Test with a large number of arguments, PII off."""
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(1000))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 32.3μs -> 10.3μs (213% faster)
    # Only first arg shown, rest are filtered, up to _MAX_NUM_ARGS
    expected = "GET 'arg0'" + " [Filtered]" * min(len(args)-1, _MAX_NUM_ARGS)

def test_large_many_args_pii_true():
    """Test with a large number of arguments, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(1000))
    # Only up to _MAX_NUM_ARGS are processed
    expected = "GET " + " ".join([repr(f"arg{i}") for i in range(_MAX_NUM_ARGS+1)])
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.1μs -> 9.55μs (194% faster)

def test_large_long_command_name_and_args():
    """Test with very long command name and args."""
    integration = RedisIntegration()
    cmd = "LONGCOMMAND" * 10
    args = tuple("X"*100 for _ in range(_MAX_NUM_ARGS+1))
    expected = cmd + " " + " ".join([repr("X"*100) if i == 0 else "[Filtered]" for i in range(_MAX_NUM_ARGS+1)])
    codeflash_output = _get_db_span_description(integration, cmd, args); desc = codeflash_output # 34.2μs -> 9.45μs (262% faster)

def test_large_truncation():
    """Test truncation with very large description."""
    integration = RedisIntegration(max_data_size=50)
    args = tuple("X"*20 for _ in range(_MAX_NUM_ARGS+1))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.3μs -> 10.0μs (182% faster)

def test_large_sensitive_command():
    """Test large sensitive command, all args filtered."""
    integration = RedisIntegration()
    args = tuple(f"secret{i}" for i in range(1000))
    codeflash_output = _get_db_span_description(integration, "SET", args); desc = codeflash_output # 28.0μs -> 10.1μs (178% faster)
    # Only up to _MAX_NUM_ARGS+1 args are processed, all filtered
    expected = "SET" + " [Filtered]" * (_MAX_NUM_ARGS+1)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import pytest  # used for our unit tests
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description

_MAX_NUM_ARGS = 10

# Minimal RedisIntegration stub for testing
class RedisIntegration:
    def __init__(self, max_data_size=None):
        self.max_data_size = max_data_size

# Minimal Scope and client stub for should_send_default_pii
class ClientStub:
    def __init__(self, send_pii):
        self._send_pii = send_pii
    def should_send_default_pii(self):
        return self._send_pii

class Scope:
    _client = ClientStub(send_pii=False)
    @classmethod
    def get_client(cls):
        return cls._client

def should_send_default_pii():
    return Scope.get_client().should_send_default_pii()
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description

# --- Begin: Unit Tests ---

# 1. Basic Test Cases

def test_basic_single_arg_no_pii():
    # Test a simple command with one argument, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 3.46μs -> 7.84μs (55.9% slower)

def test_basic_multiple_args_no_pii():
    # Test a command with multiple arguments, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 8.35μs -> 8.05μs (3.70% faster)

def test_basic_multiple_args_with_pii():
    # Test a command with multiple arguments, PII enabled
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 7.97μs -> 7.63μs (4.39% faster)

def test_basic_sensitive_command():
    # Test a sensitive command, should always be filtered
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AUTH", ("user", "password")); result = codeflash_output # 3.40μs -> 7.46μs (54.4% slower)

def test_basic_no_args():
    # Test a command with no arguments
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); result = codeflash_output # 2.16μs -> 6.63μs (67.4% slower)

# 2. Edge Test Cases

def test_edge_max_num_args():
    # Test with more than _MAX_NUM_ARGS arguments, should truncate at _MAX_NUM_ARGS
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(_MAX_NUM_ARGS + 2))
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 32.4μs -> 9.05μs (258% faster)
    # Only up to _MAX_NUM_ARGS should be included
    expected = "SET " + " ".join(
        [repr(args[0])] + [repr(arg) for arg in args[1:_MAX_NUM_ARGS+1]]
    )

def test_edge_empty_string_key():
    # Test with an empty string as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("",)); result = codeflash_output # 3.42μs -> 7.51μs (54.5% slower)

def test_edge_none_key():
    # Test with None as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (None,)); result = codeflash_output # 3.25μs -> 7.42μs (56.2% slower)

def test_edge_non_string_key():
    # Test with integer as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (12345,)); result = codeflash_output # 3.24μs -> 7.62μs (57.5% slower)

def test_edge_sensitive_command_case_insensitive():
    # Test sensitive command with mixed case
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AuTh", ("user", "password")); result = codeflash_output # 3.57μs -> 7.72μs (53.8% slower)

def test_edge_truncation_exact():
    # Test truncation where description is exactly max_data_size
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=13)
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 3.61μs -> 8.05μs (55.1% slower)

def test_edge_truncation_needed():
    # Test truncation where description exceeds max_data_size
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=10)
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 4.32μs -> 7.96μs (45.8% slower)

def test_edge_truncation_with_filtered():
    # Truncation with filtered data
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration(max_data_size=10)
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 10.3μs -> 8.92μs (15.7% faster)

def test_edge_args_are_bytes():
    # Test arguments are bytes
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (b"mykey",)); result = codeflash_output # 3.42μs -> 7.54μs (54.7% slower)

def test_edge_args_are_mixed_types():
    # Test arguments are mixed types
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = ("key", 123, None, b"bytes")
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 13.7μs -> 8.31μs (65.1% faster)
    expected = "SET 'key' 123 None b'bytes'"

def test_edge_args_are_empty_tuple():
    # Test arguments is empty tuple
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); result = codeflash_output # 2.14μs -> 6.67μs (67.9% slower)

def test_edge_args_are_list():
    # Test arguments as a list (should still work as sequence)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ["key", "val"]); result = codeflash_output # 8.54μs -> 7.96μs (7.30% faster)


def test_edge_args_are_dict():
    # Test arguments as a dict (should treat as sequence of keys)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = {"a": 1, "b": 2}
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 7.87μs -> 7.86μs (0.102% faster)

def test_edge_args_are_long_string():
    # Test argument is a very long string (truncation)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=20)
    long_str = "x" * 100
    codeflash_output = _get_db_span_description(integration, "SET", (long_str,)); result = codeflash_output # 4.46μs -> 8.43μs (47.1% slower)

# 3. Large Scale Test Cases

def test_large_many_args_no_pii():
    # Test with large number of arguments, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    args = tuple(f"key{i}" for i in range(999))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 28.6μs -> 10.6μs (171% faster)
    # Only first is shown, rest are filtered (up to _MAX_NUM_ARGS)
    expected = "MGET 'key0'" + " [Filtered]" * _MAX_NUM_ARGS

def test_large_many_args_with_pii():
    # Test with large number of arguments, PII enabled
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(f"key{i}" for i in range(999))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 30.9μs -> 9.87μs (213% faster)
    # Only up to _MAX_NUM_ARGS are shown
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])

def test_large_truncation():
    # Test truncation with large description
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=50)
    args = tuple("x" * 10 for _ in range(20))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 31.0μs -> 10.4μs (198% faster)

def test_large_sensitive_command():
    # Test large sensitive command, should always be filtered
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple("x" * 10 for _ in range(20))
    codeflash_output = _get_db_span_description(integration, "AUTH", args); result = codeflash_output # 5.42μs -> 9.30μs (41.8% slower)

def test_large_args_are_large_numbers():
    # Test with large integer arguments
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(10**6 + i for i in range(_MAX_NUM_ARGS + 1))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 27.6μs -> 9.38μs (194% faster)
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])

def test_large_args_are_large_bytes():
    # Test with large bytes arguments
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(b"x" * 100 for _ in range(_MAX_NUM_ARGS + 1))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 30.2μs -> 9.35μs (223% faster)
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
```

</details>


To edit these changes, run `git checkout
codeflash/optimize-_get_db_span_description-mg9vzvxu` and push.


[![Codeflash](https://img.shields.io/badge/Optimized%20with-Codeflash-yellow?style=flat&color=%23ffc428&logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDgwIiBoZWlnaHQ9ImF1dG8iIHZpZXdCb3g9IjAgMCA0ODAgMjgwIiBmaWxsPSJub25lIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPgo8cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGNsaXAtcnVsZT0iZXZlbm9kZCIgZD0iTTI4Ni43IDAuMzc4NDE4SDIwMS43NTFMNTAuOTAxIDE0OC45MTFIMTM1Ljg1MUwwLjk2MDkzOCAyODEuOTk5SDk1LjQzNTJMMjgyLjMyNCA4OS45NjE2SDE5Ni4zNDVMMjg2LjcgMC4zNzg0MThaIiBmaWxsPSIjRkZDMDQzIi8+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNMzExLjYwNyAwLjM3ODkwNkwyNTguNTc4IDU0Ljk1MjZIMzc5LjU2N0w0MzIuMzM5IDAuMzc4OTA2SDMxMS42MDdaIiBmaWxsPSIjMEIwQTBBIi8+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNMzA5LjU0NyA4OS45NjAxTDI1Ni41MTggMTQ0LjI3NkgzNzcuNTA2TDQzMC4wMjEgODkuNzAyNkgzMDkuNTQ3Vjg5Ljk2MDFaIiBmaWxsPSIjMEIwQTBBIi8+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNMjQyLjg3MyAxNjQuNjZMMTg5Ljg0NCAyMTkuMjM0SDMxMC44MzNMMzYzLjM0NyAxNjQuNjZIMjQyLjg3M1oiIGZpbGw9IiMwQjBBMEEiLz4KPC9zdmc+Cg==)](https://codeflash.ai)

Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
### Description

openai now uses `Omit` instead of `NotGiven`.

openai/openai-python@8260288
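
One defensive way to treat both sentinels as "value not provided" is sketched below. The import locations are an assumption, and older openai releases may not expose `Omit` at all, hence the guarded imports; this is an illustration, not the integration's actual code.

```python
# Hedged sketch: handle both openai sentinels, whichever the installed
# openai version exposes. Import paths are assumptions.
try:
    from openai import NotGiven
except ImportError:  # pragma: no cover
    NotGiven = None

try:
    from openai import Omit
except ImportError:  # pragma: no cover
    Omit = None

_SENTINEL_TYPES = tuple(t for t in (NotGiven, Omit) if t is not None)


def is_given(value) -> bool:
    """Return True only for arguments the caller actually passed."""
    # isinstance() with an empty tuple is always False, so this degrades
    # gracefully if neither sentinel type could be imported.
    return not isinstance(value, _SENTINEL_TYPES)
```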


#### Issues

* resolves: getsentry#4923
* resolves: PY-1885
Check the `call_type` value to distinguish embeddings from chats. The
`client` decorator sets `call_type` by introspecting the function name
and wraps all of the top-level `litellm` functions. If users import from
`litellm.llms`, embedding calls may still appear as chats, since the
input callback we provide does not have enough information in that case.
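
A rough sketch of the distinction, assuming the callback kwargs carry the `call_type` set by litellm's `@client` decorator (the helper name and value set are hypothetical, not the actual integration code):

```python
# Hypothetical helper illustrating the call_type check described above.
# The kwargs shape and the value set ("embedding", "aembedding", ...) are
# assumptions for illustration; the real Sentry integration differs.
def _operation_from_kwargs(kwargs: dict) -> str:
    call_type = kwargs.get("call_type", "")
    if call_type in ("embedding", "aembedding"):
        return "embeddings"
    # Calls that bypass the top-level litellm functions (e.g. imports from
    # litellm.llms) may not carry a usable call_type, so they fall back to
    # being reported as chat -- exactly the limitation noted above.
    return "chat"


# usage sketch
print(_operation_from_kwargs({"call_type": "embedding"}))  # -> embeddings
print(_operation_from_kwargs({}))                          # -> chat
```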

Closes getsentry#4908
### Description
When async generators throw a `GeneratorExit`, we end up with

```
ValueError: <Token var=<ContextVar name='current_scope' default=None at 0x7f04cf05fb50> at 0x7f04ceb17340> was created in a different Context
```

so we just catch that `ValueError` and rely on the GC to clean up the
contextvar, since we can't be smarter than that for this case anyway.
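
The failure boils down to resetting a `ContextVar` token from a different context than the one that created it. A minimal sketch of the workaround (the variable name is illustrative, not Sentry's actual scope handling):

```python
# Token.reset() raises ValueError when the token was created in a different
# Context (e.g. after a GeneratorExit in an async generator), so swallow it
# and let garbage collection reclaim the stale value.
from contextvars import ContextVar

_current_scope = ContextVar("current_scope", default=None)  # illustrative name


def enter_scope(scope):
    return _current_scope.set(scope)


def exit_scope(token):
    try:
        _current_scope.reset(token)
    except ValueError:
        # "... was created in a different Context": nothing safe to do here;
        # rely on the GC to clean up.
        pass
```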

#### Issues
* resolves: getsentry#4925 
* resolves: PY-1886