⚡️ Speed up function postprocess by 210%
#567
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed · codeflash-ai wants to merge 6 commits into optimize-infer from codeflash/optimize-postprocess-mde2fc44
Conversation
Signed-off-by: Saurabh Misra <[email protected]>
Here's an optimized version of your code with the following improvements.

- **Avoid repeated computation**: `np.exp(logits)` was computed more than once per value in `sigmoid_stable`. Cache it where possible.
- **Avoid flattening with `reshape`**: Use `.ravel()` for a fast view rather than `.reshape` when you don't need a copy.
- **Vectorized selection**: Use `np.argpartition` for O(n) partial selection instead of a full sort (`np.argsort`) when only the top K elements are needed; sort only those afterward for correct order.
- **Preallocate output**: Preallocate a fixed-size array when possible.

Here's the improved code.

**Notes:**

- `sigmoid_stable` no longer calls `np.exp(x)` and `np.exp(-x)` separately for each value; it uses `np.exp(-np.abs(x))` instead, which is slightly faster and more numerically stable.
- Uses `np.argpartition(..., k)` to efficiently get the top K indices; only these are then sorted by value.
- Uses `.ravel()` instead of `.reshape(-1)` for flattening, which is faster when a view suffices.
- Output structure and function signatures are preserved.
- All comments are kept except those relating to changed code.

This should noticeably speed up use on large arrays or large batch sizes.
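The actual `postprocess` diff is not shown in this page, so here is a minimal sketch of the two main techniques the notes describe: the single-exponential stable sigmoid and argpartition-based top-K selection. The function names `sigmoid_stable` and `top_k` (and the `scores`/`k` parameters) follow the discussion above; anything beyond that is an illustrative assumption, not the code from `codeflash/process/infer.py`.

```python
import numpy as np

def sigmoid_stable(x: np.ndarray) -> np.ndarray:
    # Numerically stable sigmoid: exp(-|x|) never overflows, and the
    # exponential is computed only once per element instead of twice.
    z = np.exp(-np.abs(x))
    return np.where(x >= 0, 1.0 / (1.0 + z), z / (1.0 + z))

def top_k(scores: np.ndarray, k: int):
    # O(n) partial selection: argpartition finds the k largest values
    # without fully sorting; only those k winners are then sorted.
    flat = scores.ravel()          # view (no copy) for contiguous input
    k = min(k, flat.size)
    idx = np.argpartition(flat, -k)[-k:]   # top-k indices, unordered
    order = np.argsort(flat[idx])[::-1]    # sort just the k winners
    idx = idx[order]
    return idx, flat[idx]
```

For large score arrays this replaces an O(n log n) full `argsort` with O(n + k log k) work, which is where most of the speedup for this kind of postprocessing step would come from.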
📄 210% (2.10x) speedup for `postprocess` in `codeflash/process/infer.py`

⏱️ Runtime: 6.64 milliseconds → 2.14 milliseconds (best of 563 runs)

📝 Explanation and details
Here’s an optimized version of your code with the following improvements.
Here’s the improved code.
Notes:
- `sigmoid_stable` no longer calls `np.exp(x)` and `np.exp(-x)` separately for each value; it uses `np.exp(-np.abs(x))` instead, which is slightly faster and more numerically stable.
- Uses `np.argpartition(..., k)` to efficiently get the top K indices; only these are then sorted by value.
- Uses `.ravel()` instead of `.reshape(-1)` for flattening, which is faster when a view suffices.

This should noticeably speed up use on large arrays or large batch sizes.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
⏪ Replay Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-postprocess-mde2fc44` and push.