⚡️ Speed up function retry_with_backoff by -85%
#118
Closed
📄 -85% (-0.85x) speedup for `retry_with_backoff` in `src/async_examples/concurrency.py`

⏱️ Runtime: 802 microseconds → 5.23 milliseconds (best of 187 runs)

📝 Explanation and details
The optimization replaces the blocking `time.sleep()` call with the non-blocking `await asyncio.sleep()`. While this change appears to have a negative runtime impact (-84% speedup) on individual function calls due to the overhead of the async sleep machinery, it delivers a significant 21.4% throughput improvement in concurrent scenarios.

Key optimization: replaced `time.sleep(0.00001 * attempt)` with `await asyncio.sleep(0.00001 * attempt)`, as sketched below.
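A minimal sketch of the change, assuming a retry helper of roughly this shape (the full body of `retry_with_backoff` in `src/async_examples/concurrency.py` is not reproduced in this report, so the surrounding retry logic is illustrative):

```python
import asyncio
import time


# Before: the backoff delay blocks the entire event loop.
async def retry_with_backoff_blocking(func, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return await func()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(0.00001 * attempt)  # freezes every other coroutine


# After: the backoff delay yields control back to the event loop.
async def retry_with_backoff(func, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return await func()
        except Exception:
            if attempt == max_attempts:
                raise
            await asyncio.sleep(0.00001 * attempt)  # other coroutines keep running
```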
Why this improves concurrent performance:

The original `time.sleep()` is a blocking call that freezes the entire event loop during backoff delays, preventing other coroutines from executing. `await asyncio.sleep()` yields control back to the event loop, allowing other concurrent operations to proceed, as the example below illustrates.
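A small, self-contained illustration of that trade-off (the flaky operation, delays, and task counts here are hypothetical and are not taken from the report):

```python
import asyncio
import time


async def flaky_call(state):
    # Hypothetical operation that fails on its first attempt, then succeeds.
    state["tries"] = state.get("tries", 0) + 1
    if state["tries"] == 1:
        raise RuntimeError("transient failure")
    return "ok"


async def retry(state, use_async_sleep, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return await flaky_call(state)
        except RuntimeError:
            if attempt == max_attempts:
                raise
            if use_async_sleep:
                await asyncio.sleep(0.01 * attempt)  # event loop stays responsive
            else:
                time.sleep(0.01 * attempt)  # event loop is frozen for the delay


async def main():
    for use_async_sleep in (False, True):
        start = time.perf_counter()
        await asyncio.gather(*(retry({}, use_async_sleep) for _ in range(100)))
        label = "asyncio.sleep" if use_async_sleep else "time.sleep"
        print(f"{label}: {time.perf_counter() - start:.3f}s for 100 concurrent retries")


asyncio.run(main())
```

With the blocking sleep, the 100 backoff delays run back to back (roughly one second of wall time); with `asyncio.sleep` they overlap and the whole batch finishes in roughly one delay's worth of time. That overlap is the throughput effect the report measures.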
Performance trade-off analysis:

Test case benefits:

This optimization particularly benefits the test cases that exercise concurrency:

- `test_retry_with_backoff_many_concurrent_*` - multiple concurrent retry operations
- `test_retry_with_backoff_throughput_*` - high-volume concurrent processing

The throughput improvement demonstrates that, despite the per-call overhead, the system-wide performance gains from proper async behavior significantly outweigh the individual call costs in realistic concurrent usage patterns. A sketch of what such a concurrency-focused test might look like follows.
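A sketch in the style of those test names, assuming the `retry_with_backoff(func)` signature used in the examples above, an importable `src.async_examples.concurrency` package, and a pytest-asyncio setup (the actual generated tests are not included in this excerpt):

```python
import asyncio

import pytest

from src.async_examples.concurrency import retry_with_backoff


@pytest.mark.asyncio
async def test_retry_with_backoff_many_concurrent_calls():
    def make_flaky():
        state = {"tries": 0}

        async def flaky():
            # Hypothetical workload: fails on its first attempt, then succeeds.
            state["tries"] += 1
            if state["tries"] == 1:
                raise RuntimeError("transient failure")
            return "ok"

        return flaky

    # 50 retrying coroutines running side by side should all complete.
    results = await asyncio.gather(
        *(retry_with_backoff(make_flaky()) for _ in range(50))
    )
    assert results == ["ok"] * 50
```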
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-retry_with_backoff-mfw4qs5c` and push.