Get throughput from output for async functions #739
Conversation
This reverts commit 96211b8.
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
PR Code Suggestions ✨
Explore these optional code suggestions:
The optimization achieves a 25% speedup by **eliminating redundant AST node creation** inside the loop.

**Key change:** The `timeout_decorator` AST node is now created once before the loop instead of being recreated for every test method that needs it. In the original code, this AST structure was built 3,411 times during profiling, consuming significant time in object allocation and initialization.

**Why this works:** AST nodes are immutable once created, so the same `timeout_decorator` instance can be safely appended to multiple method decorator lists. This eliminates:
- Repeated `ast.Call()` constructor calls
- Redundant `ast.Name()` and `ast.Constant()` object creation
- Multiple attribute assignments for the same decorator structure

**Performance characteristics:** The optimization is most effective for large test classes with many test methods (showing 24-33% improvements in tests with 500+ methods), while having minimal impact on classes with few or no test methods. This makes it particularly valuable for comprehensive test suites where classes commonly contain dozens of test methods. The line profiler shows the AST node creation operations dropped from ~3,400 hits to just ~25 hits, directly correlating with the observed speedup.
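The hoisting described above can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code: the decorator name `codeflash_timeout`, the `test_` prefix check, and the timeout value are all assumptions.

```python
import ast

def add_timeout_decorator(class_node: ast.ClassDef, seconds: int = 15) -> ast.ClassDef:
    # Built once, before the loop: the node is never mutated afterwards,
    # so the same instance can safely appear in several decorator lists.
    timeout_decorator = ast.Call(
        func=ast.Name(id="codeflash_timeout", ctx=ast.Load()),  # assumed name
        args=[ast.Constant(value=seconds)],
        keywords=[],
    )
    for item in class_node.body:
        if isinstance(item, ast.FunctionDef) and item.name.startswith("test_"):
            item.decorator_list.append(timeout_decorator)
    return class_node

tree = ast.parse(
    "class TestFoo:\n"
    "    def test_a(self): pass\n"
    "    def test_b(self): pass\n"
)
cls = tree.body[0]
add_timeout_decorator(cls)
decorated = [f.name for f in cls.body
             if isinstance(f, ast.FunctionDef) and f.decorator_list]
```

Both test methods end up sharing the single decorator node, which is what removes the ~3,400 redundant constructor calls the profiler observed.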
⚡️ Codeflash found optimizations for this PR: 26% (0.26x) speedup for
⚡️ Codeflash found optimizations for this PR: 69% (0.69x) speedup for
…25-09-22T19.41.32 ⚡️ Speed up method `AsyncCallInstrumenter.visit_ClassDef` by 26% in PR #739 (`get-throughput-from-output`)
This PR is now faster! 🚀 @KRRT7 accepted my optimizations from:
⚡️ Codeflash found optimizations for this PR: 50% (0.50x) speedup for
Add end-to-end test for async optimization
@@ -1,19 +0,0 @@
-name: Lint
why are we removing this?
I'll restore it later. It's behaving differently locally vs CI.
line_id = os.environ["CODEFLASH_CURRENT_LINE_ID"]
loop_index = int(os.environ["CODEFLASH_LOOP_INDEX"])
- test_module_name, test_class_name, test_name = extract_test_context_from_frame()
+ test_module_name, test_class_name, test_name = extract_test_context_from_env()
In a line below, why do we have a default of 0? `iteration = os.environ.get("CODEFLASH_TEST_ITERATION", "0")`
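A minimal sketch of the env-based context extraction discussed above. The helper name `extract_test_context_from_env` comes from the diff, but its body and the `CODEFLASH_TEST_MODULE` / `CODEFLASH_TEST_CLASS` / `CODEFLASH_TEST_NAME` variable names are assumptions; only `CODEFLASH_TEST_ITERATION` and its `"0"` default appear in the thread.

```python
import os

def extract_test_context_from_env():
    # Assumed variable names: a pytest hook would set these per test.
    test_module_name = os.environ["CODEFLASH_TEST_MODULE"]
    # Empty string means "no class" (module-level test function).
    test_class_name = os.environ.get("CODEFLASH_TEST_CLASS") or None
    test_name = os.environ["CODEFLASH_TEST_NAME"]
    return test_module_name, test_class_name, test_name

# Simulate the env vars the pytest hook would have set.
os.environ.update({
    "CODEFLASH_TEST_MODULE": "tests.test_async",
    "CODEFLASH_TEST_CLASS": "",
    "CODEFLASH_TEST_NAME": "test_throughput",
})
os.environ.pop("CODEFLASH_TEST_ITERATION", None)
ctx = extract_test_context_from_env()
# The questioned pattern: a string default keeps int() from ever seeing
# a missing variable, so the first iteration reads as 0.
iteration = int(os.environ.get("CODEFLASH_TEST_ITERATION", "0"))
```

The `"0"` default trades an explicit failure for a silent fallback, which is presumably what the reviewer is asking about.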
blocklist_args = [f"-p no:{plugin}" for plugin in BEHAVIORAL_BLOCKLISTED_PLUGINS if plugin != "cov"]
logger.info(f"{' '.join(coverage_cmd + common_pytest_args + blocklist_args + result_args + test_files)}")
info level looks too high to me for this log
# When throughput data is available, accept if EITHER throughput OR runtime improves significantly
throughput_acceptance = throughput_improved and throughput_is_best
runtime_acceptance = runtime_improved and runtime_is_best
return throughput_acceptance or runtime_acceptance
I think this might have cases where one metric improves and the other degrades. This might require more thought in the future.
Yes. Currently, runtime "degrades" in codeflash-ai/optimize-me#121 but throughput improves; this still needs work.
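The acceptance rule from the diff can be isolated as a pure predicate, which makes the reviewer's concern easy to see. The boolean inputs are assumed to be computed elsewhere (significance thresholds, best-so-far tracking); this is a sketch of the logic shown in the diff, not the surrounding code.

```python
def accept_candidate(
    throughput_improved: bool,
    throughput_is_best: bool,
    runtime_improved: bool,
    runtime_is_best: bool,
) -> bool:
    # When throughput data is available, accept if EITHER throughput OR
    # runtime improves significantly and is the best seen so far.
    throughput_acceptance = throughput_improved and throughput_is_best
    runtime_acceptance = runtime_improved and runtime_is_best
    return throughput_acceptance or runtime_acceptance

# The concern raised above: throughput improves while runtime degrades,
# yet the candidate is still accepted because only one side must pass.
accepted = accept_candidate(
    throughput_improved=True, throughput_is_best=True,
    runtime_improved=False, runtime_is_best=False,
)
```

One possible tightening would be to also require that the other metric does not regress beyond some tolerance, as the thread suggests this needs more thought.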
Approving, but leaving a few comments.
PR Type
Enhancement, Tests, Bug fix
Description
Derive test context from pytest hooks
Add async throughput calculation support
Extend baseline model with throughput
Update tests for new interfaces
Diagram Walkthrough
File Walkthrough
6 files
- Replace stack introspection with env-based test context
- Add async throughput to baseline model
- Plumb results for throughput; compute and store async throughput
- Add helpers to compute throughput from stdout
- Set/clear test context env vars per test
- Log executed pytest command for visibility

8 files
- Adapt to new API and class-name expectations
- Provide env context; fix sync context extraction test
- Update for triple-return API and expectations
- Remove obsolete stack-based context tests
- Adjust tests to new run_and_parse_tests signature
- Update behavioral/perf/line profile calls to new API
- Align to new return tuple from test runner
- Adapt pickle patch tests to new API

1 file
- Update launch args to async concurrency example
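One of the changes described in this PR adds helpers to compute throughput from captured stdout. A minimal sketch under an assumed marker format: the `CODEFLASH_OP_DONE` marker, the helper name, and the units (operations per second) are all illustrative, not the PR's actual implementation.

```python
import re

def throughput_from_stdout(stdout: str, duration_s: float) -> float:
    """Derive operations-per-second from per-operation markers in stdout."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    # Count one assumed marker line per completed async operation.
    completed = len(re.findall(r"^CODEFLASH_OP_DONE$", stdout, flags=re.MULTILINE))
    return completed / duration_s

# Example: three completed operations observed over 1.5 seconds.
captured = "CODEFLASH_OP_DONE\nCODEFLASH_OP_DONE\nCODEFLASH_OP_DONE\n"
ops_per_s = throughput_from_stdout(captured, duration_s=1.5)
```

Counting completions from output rather than timing individual awaits is what lets concurrent async functions be scored on aggregate progress, which is the metric the acceptance logic above compares alongside runtime.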