⚡️ Speed up function cosine_similarity_top_k by 13%
#201
📄 13% (0.13x) speedup for `cosine_similarity_top_k` in `src/statistics/similarity.py`

⏱️ Runtime: 5.37 milliseconds → 4.75 milliseconds (best of 250 runs)

📝 Explanation and details
The optimized code achieves a 13% speedup through several key optimizations in the `cosine_similarity` function.

Key Optimizations (a combined sketch of these changes follows the list):

- More efficient array conversion: uses `np.asarray(X, dtype=np.float64)` instead of `np.array(X)`. This avoids unnecessary copies when the input is already a numpy array and ensures consistent float64 precision.
- Broadcasting optimization: adds `keepdims=True` to the norm calculations, allowing `X_norm @ Y_norm.T` instead of the more expensive `np.outer(X_norm, Y_norm)`. This reduces memory allocation and leverages optimized matrix multiplication.
- Improved NaN/Inf handling: replaces the boolean-indexing approach with `np.copyto(..., where=~np.isfinite(...))` and an `np.errstate` context manager, which is more efficient for in-place operations.
- Minor variable caching: stores `flat_scores = score_array.flatten()` to avoid repeated flatten operations.
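A minimal sketch of how these pieces could fit together, assuming `cosine_similarity` returns the full pairwise matrix and `cosine_similarity_top_k` is a thin wrapper over it; the actual signatures and return types in `src/statistics/similarity.py` may differ:

```python
import numpy as np

def cosine_similarity(X, Y):
    # asarray avoids a copy when the input is already a float64 ndarray
    X = np.asarray(X, dtype=np.float64)
    Y = np.asarray(Y, dtype=np.float64)

    # keepdims=True keeps the norms as column vectors, so
    # X_norm @ Y_norm.T is a plain matrix product instead of np.outer
    X_norm = np.linalg.norm(X, axis=1, keepdims=True)
    Y_norm = np.linalg.norm(Y, axis=1, keepdims=True)

    # silence divide-by-zero / invalid warnings from zero vectors,
    # then overwrite the resulting NaN/Inf entries in place
    with np.errstate(divide="ignore", invalid="ignore"):
        similarity = (X @ Y.T) / (X_norm @ Y_norm.T)
    np.copyto(similarity, 0.0, where=~np.isfinite(similarity))
    return similarity

def cosine_similarity_top_k(X, Y, top_k=5):
    # hypothetical wrapper for illustration only
    score_array = cosine_similarity(X, Y)

    # flatten once and reuse the result instead of flattening repeatedly
    flat_scores = score_array.flatten()
    top_k = min(top_k, flat_scores.size)

    # indices of the top_k scores, ordered best to worst
    top_idx = np.argpartition(flat_scores, -top_k)[-top_k:]
    top_idx = top_idx[np.argsort(flat_scores[top_idx])[::-1]]

    rows, cols = np.unravel_index(top_idx, score_array.shape)
    return list(zip(rows.tolist(), cols.tolist())), flat_scores[top_idx].tolist()
```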
Performance Impact by Test Case:
Why It's Faster:

The `np.outer` call in the original allocates a full outer-product matrix from the 1-D norm vectors, while the optimized version uses broadcasting with the `@` operator, which is more cache-friendly and leverages BLAS optimizations. `keepdims=True` eliminates the need for reshaping, and `np.asarray` with an explicit dtype avoids potential type-inference overhead.

The optimization maintains identical output behavior and is particularly effective for workloads involving similarity computations on larger datasets or scenarios with many zero vectors.
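One way to see the `np.outer` versus broadcast `@` difference described above is a quick, self-contained timing sketch; the shapes and repetition counts here are illustrative and not taken from the report:

```python
import numpy as np
from timeit import timeit

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 128))
Y = rng.normal(size=(1000, 128))

# original style: 1-D norms combined via np.outer
x1 = np.linalg.norm(X, axis=1)
y1 = np.linalg.norm(Y, axis=1)
t_outer = timeit(lambda: np.outer(x1, y1), number=200)

# optimized style: keepdims column vectors combined via a matrix product
x2 = np.linalg.norm(X, axis=1, keepdims=True)   # shape (2000, 1)
y2 = np.linalg.norm(Y, axis=1, keepdims=True)   # shape (1000, 1)
t_matmul = timeit(lambda: x2 @ y2.T, number=200)

print(f"np.outer:    {t_outer:.4f} s")
print(f"broadcast @: {t_matmul:.4f} s")
```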
✅ Correctness verification report:
🌀 Generated Regression Tests

🔎 Concolic Coverage Tests
codeflash_concolic_ep6oyi9w/tmpwu009xfa/test_concolic_coverage.py::test_cosine_similarity_top_k
codeflash_concolic_ep6oyi9w/tmpwu009xfa/test_concolic_coverage.py::test_cosine_similarity_top_k_2

To edit these changes, `git checkout codeflash/optimize-cosine_similarity_top_k-mjhwfxfp` and push.