⚡️ Speed up function is_function_being_optimized_again by 44% in PR #275 (dont-optimize-repeatedly-gh-actions)
#290
⚡️ This pull request contains optimizations for PR #275
If you approve this dependent PR, these changes will be merged into the original PR branch
`dont-optimize-repeatedly-gh-actions`.

📄 44% (0.44x) speedup for `is_function_being_optimized_again` in `codeflash/api/cfapi.py`

⏱️ Runtime: 2.79 milliseconds → 1.94 milliseconds (best of 74 runs)

📝 Explanation and details
Here is the optimized version of your program, focusing on speeding up the slow path in `make_cfapi_request`, which is dominated by `json.dumps(payload, indent=None, default=pydantic_encoder)` and by `requests.post(..., data=json_payload, ...)`.

Key optimizations:
- Use `requests.post(..., json=payload, ...)`: this lets `requests` do the JSON serialization more efficiently (internally it uses `json.dumps`), and `requests` adds the `Content-Type: application/json` header itself when the `json` argument is used.
- Only fall back to `default=pydantic_encoder` when the payload contains objects that require it; otherwise the standard encoder is much faster. Try a direct serialization first and fall back if a `TypeError` is raised (see the sketch below).
- Avoid calling `.upper()` inside the POST/GET dispatch by normalizing the method early.
- All comments are preserved; code was only modified or added where the behavior changed.
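A minimal sketch of how these pieces could fit together. The function name `make_cfapi_request` and the `json.dumps(..., default=pydantic_encoder)` fallback come from the description above; the base-URL constant, the signature, and the timeouts are assumptions for illustration and may differ from the actual `codeflash/api/cfapi.py`.

```python
import json
from typing import Any, Optional

import requests
from pydantic.json import pydantic_encoder

CFAPI_BASE_URL = "https://app.codeflash.ai"  # assumed base URL, for illustration only


def make_cfapi_request(endpoint: str, method: str, payload: Optional[dict[str, Any]] = None) -> requests.Response:
    """Sketch of the optimized request helper described above."""
    url = f"{CFAPI_BASE_URL}{endpoint}"
    method = method.upper()  # normalize once instead of calling .upper() at each dispatch point
    if method == "POST":
        try:
            # Fast path: let requests serialize the payload with the standard
            # JSON encoder and set the Content-Type header itself.
            return requests.post(url, json=payload, timeout=10)
        except TypeError:
            # Slow path: the payload contains objects (e.g. pydantic models)
            # that the standard encoder cannot handle, so fall back to
            # pydantic_encoder as before.
            json_payload = json.dumps(payload, indent=None, default=pydantic_encoder)
            return requests.post(
                url,
                data=json_payload,
                headers={"Content-Type": "application/json"},
                timeout=10,
            )
    return requests.get(url, timeout=10)
```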
Explanation of biggest win:
The largest bottleneck was in the JSON encoding and in manually setting the content-type header. Now `requests.post(..., json=payload)` is used as the fast path for the vast majority of requests, only falling back to a slower path when necessary. This should substantially speed up both serialization and the POST. The approach is backward-compatible and will produce exactly the same results as before.
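To make the fast-path/fallback split concrete, here is a small, hypothetical illustration of when the standard encoder suffices and when `pydantic_encoder` is needed; the `FunctionRef` model and the payload shapes are invented for this example and are not part of the real API.

```python
import json

from pydantic import BaseModel
from pydantic.json import pydantic_encoder


class FunctionRef(BaseModel):  # hypothetical model, for illustration only
    qualified_name: str
    file_path: str


plain_payload = {"pr_number": 275}
model_payload = {
    "function": FunctionRef(
        qualified_name="is_function_being_optimized_again",
        file_path="codeflash/api/cfapi.py",
    )
}

json.dumps(plain_payload)  # fast path: the standard encoder handles plain dicts

try:
    json.dumps(model_payload)  # raises TypeError: a BaseModel is not JSON serializable
except TypeError:
    # slow path: pydantic_encoder knows how to serialize the model
    print(json.dumps(model_payload, default=pydantic_encoder))
```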
✅ Correctness verification report:
🌀 Generated Regression Tests Details
To edit these changes, `git checkout codeflash/optimize-pr275-2025-06-05T20.35.24` and push.