# Enabling Integer Matrix Multiplication #86
## Conversation
Pull request overview
This pull request enables integer matrix multiplication for NDArray types using a conversion-based approach. Previously, integer matrix multiplication was unsupported and threw an error.
Changes:
- Added integer matrix multiplication support by converting the integer operands to Float64, performing the multiplication, then converting the result back to the original integer type
- Updated test cases to verify integer matrix multiplication works correctly
- Removed test that verified integer matrix multiplication threw an error
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/ndarray/binary.jl | Replaced error-throwing function with implementation that converts integers to Float64 for multiplication, then converts back to original integer type |
| test/tests/gemm.jl | Updated integer test case to verify integer matrix multiplication works instead of testing that it throws an error |
src/ndarray/binary.jl (outdated)

```julia
IntermediateType = Float64

A_float = cuNumeric.as_type(rhs1, IntermediateType)
B_float = cuNumeric.as_type(rhs2, IntermediateType)

C_float = A_float * B_float
C_int = cuNumeric.as_type(C_float, T)
```
**Copilot AI** commented on Jan 14, 2026:
Converting through Float64 may cause precision loss, or overflow on the conversion back, for large integer values. Float64 has a 53-bit significand, so every Int32 value is representable exactly; for Int64, however, values with magnitude above 2^53 will lose precision. Consider documenting this limitation or adding a check for Int64 matrices with large values.
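The round-trip concern is easy to reproduce outside Julia. A minimal sketch in Python/NumPy (illustrative only, not code from this PR), showing that the int → float64 → int cast is exact for all Int32 values but not for Int64 values above 2^53:

```python
import numpy as np

# Float64 has a 53-bit significand, so the int -> float64 -> int round trip
# is exact only for magnitudes up to 2**53.
exact = np.int64(2**53)      # representable exactly in Float64
lossy = np.int64(2**53 + 1)  # rounds to 2**53 in Float64

assert np.int64(np.float64(exact)) == exact
assert np.int64(np.float64(lossy)) != lossy  # precision silently lost

# Every Int32 survives the round trip: |Int32| < 2**31 <= 2**53.
i32 = np.int32(2**31 - 1)
assert np.int32(np.float64(i32)) == i32
```

The same arithmetic applies to the Julia path above: products of Int64 matrices can exceed 2^53 even when the inputs are modest, so results may round before the cast back to `T`.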
Just doing a basic cast from int -> float64 -> int.