⚡️ Speed up function _cached_joined by 82%
#435
Closed
📄 82% (0.82x) speedup for `_cached_joined` in `code_to_optimize/code_directories/simple_tracer_e2e/workload.py`
⏱️ Runtime: 69.8 milliseconds → 38.4 milliseconds (best of 129 runs)
📝 Explanation and details
Here’s a significantly faster version of your code.
" ".join(map(str, range(number)))is slightly faster and uses less memory.lru_cacheoverhead isn’t necessary if the only cache size you need is 1001 and the argumentnumberis a small integer. It's faster and lower overhead to use a simple dict for caching, and you can control the cache size yourself.Here’s the optimized version.
Notes:
" ".join(map(str, ...))is faster and more memory-efficient than a list comprehension here.Lock/withusage for a slightly faster single-threaded version."map(str, ...)"is used for faster conversion.If you want the absolutely highest performance in a single-threaded setting, drop the Lock.
Either way, you get better performance and lower memory per invocation.
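A lock-free variant along the lines of that last note might look like this (again a sketch under the same assumed names and cache cap, not the PR's exact code):

```python
_cache = {}

def _cached_joined(number: int) -> str:
    cached = _cache.get(number)
    if cached is None:
        cached = " ".join(map(str, range(number)))
        if len(_cache) < 1001:  # assumed cap, matching the explanation above
            _cache[number] = cached
    return cached
```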
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-_cached_joined-mccvx4v9` and push.