⚡️ Speed up function _cached_joined by 14,223%
#393 · Closed
📄 14,223% (142.23×) speedup for `_cached_joined` in `code_to_optimize/code_directories/simple_tracer_e2e/workload.py`

⏱️ Runtime: 135 milliseconds → 946 microseconds (best of 72 runs)

📝 Explanation and details
Here's a version that runs faster by avoiding the overhead of `functools.lru_cache` and the creation of tuple keys for the cache.

For small integer ranges, use a `list` to store results and return them directly; this is the fastest possible cache for sequential integer keys. The `" ".join(map(str, ...))` call is already optimal for the join step, so we preserve it.

Notes: this approach is faster than `lru_cache` because it avoids the overhead of dict hashing and uses a plain list lookup instead.

✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-_cached_joined-mccuw35o` and push.