check for Bottom in vmath rewrite #585
Conversation
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Pull request overview
This PR addresses type inference bugs that cause analysis results to reach Bottom type in certain cases. The fix prevents vmath binary operation desugaring when either operand has Bottom type, avoiding incorrect transformations.
Key changes:
- Added early return in `DesugarBinOp.rewrite_Statement()` to skip desugaring when operands are `Bottom` type
- Added test cases documenting known type inference failures with `@pytest.mark.xfail()` markers
- Fixed test data type from integer to float list in `test_typed_kernel_add()` to match type annotations
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `src/kirin/dialects/vmath/rewrites/desugar.py` | Added check to prevent desugaring when either operand is `Bottom` type, returning an empty `RewriteResult()` instead |
| `test/dialects/vmath/test_desugar.py` | Added `pytest` import, marked 5 tests as expected failures with `@pytest.mark.xfail()`, added a new multiplication test case, and fixed test data types |
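For readers skimming the diff, the guard in `desugar.py` is essentially an early return taken before any desugared IR is built. The snippet below is a self-contained toy illustrating that pattern only; `RewriteResult`, `BOTTOM`, and `rewrite_binop` here are simplified stand-ins, not kirin's actual classes or the code in this PR.

```python
from dataclasses import dataclass


# Toy stand-ins for the kirin objects involved; only the shape of the
# early-return guard is the point here, not the real rewrite API.
@dataclass
class RewriteResult:
    has_done_something: bool = False


BOTTOM = "Bottom"  # stand-in for the Bottom type produced by failed inference


def rewrite_binop(lhs_type: str, rhs_type: str) -> RewriteResult:
    # If type inference already collapsed either operand to Bottom, skip the
    # vmath desugaring entirely and report that nothing was rewritten.
    if lhs_type == BOTTOM or rhs_type == BOTTOM:
        return RewriteResult()
    # ... the desugared vmath statement would be built here (omitted) ...
    return RewriteResult(has_done_something=True)


assert rewrite_binop(BOTTOM, "Float").has_done_something is False
assert rewrite_binop("Float", "Float").has_done_something is True
```

Returning an empty result leaves the original statement untouched, so the underlying type-inference bug can still be addressed separately later, as noted in the review discussion below.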
On the line `@pytest.mark.xfail()` (following `assert np.allclose(np.asarray(res), np.array([6, 7, 8]))`), Copilot AI commented on Nov 21, 2025:
The @pytest.mark.xfail() decorator should include a reason parameter to explain why this test is expected to fail. Other tests in the codebase consistently use @mark.xfail(reason="...") to document the known issue.
For example, based on the PR description, consider: @pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
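Spelled out as a full test, the suggested form would look roughly like this. Everything below is a self-contained placeholder: `mult_kernel` here is a plain-Python stand-in for the kirin kernel defined in `test_desugar.py`, so only the decorator usage reflects the actual suggestion.

```python
import numpy as np
import pytest


# Plain-Python stand-in for the vmath kernel under test; the real kernel is
# built with kirin decorators in test_desugar.py and currently hits the
# Bottom-type inference bug described in this PR.
def mult_kernel(x: float, y: list[float]) -> list[float]:
    return [x * v for v in y]


# The reason string is what the review asks for, so the known limitation is
# visible in test reports instead of a bare xfail marker.
@pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
def test_mult_scalar_list():
    # Note: this stand-in actually passes (pytest would report XPASS); only
    # the real kirin kernel exhibits the expected failure.
    res = mult_kernel(x=3.0, y=[3.0, 4.0, 5.0])
    assert np.allclose(np.asarray(res), np.asarray([9.0, 12.0, 15.0]))
```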
On the line `@pytest.mark.xfail()` (following `return x - y`), Copilot AI commented on Nov 21, 2025:
The @pytest.mark.xfail() decorator should include a reason parameter to explain why this test is expected to fail. Other tests in the codebase consistently use @mark.xfail(reason="...") to document the known issue.
For example, based on the PR description, consider: @pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
On the line `@pytest.mark.xfail()` (following `return mult_kernel(x=3.0, y=[3.0, 4.0, 5.0])`), Copilot AI commented on Nov 21, 2025:
The @pytest.mark.xfail() decorator should include a reason parameter to explain why this test is expected to fail. Other tests in the codebase consistently use @mark.xfail(reason="...") to document the known issue.
For example, based on the PR description, consider: @pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
On the line `@pytest.mark.xfail()` (following `assert np.allclose(np.asarray(res), np.asarray([0, 3, 6]))`), Copilot AI commented on Nov 21, 2025:
The @pytest.mark.xfail() decorator should include a reason parameter to explain why this test is expected to fail. Other tests in the codebase consistently use @mark.xfail(reason="...") to document the known issue.
For example, based on the PR description, consider: @pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
On the line `@pytest.mark.xfail()` (following `return add_kernel(x=3.0, y=[3.0, 4, 5])`), Copilot AI commented on Nov 21, 2025:
The @pytest.mark.xfail() decorators should include a reason parameter to explain why these tests are expected to fail. Other tests in the codebase consistently use @mark.xfail(reason="...") to document the known issue.
For example, based on the PR description, consider: @pytest.mark.xfail(reason="type inference results in Bottom type for these cases")
kaihsin left a comment:
LGTM. For context @Roger-luo, this is causing some issues with downstream behavior. We should address this typeinfer issue properly in another PR.
Yay! This fixed a tricky bug in my pipeline. Any chance of backporting it to 0.21 @neelay893?
Yes, this should be able to be backported to 0.21 too.
Type inference bugs cause analysis results to collapse to `Bottom` in certain cases. Add a check for `Bottom` so that vmath binop desugaring is not applied if either argument has type `Bottom`.