Support decomposition of torch.broadcast_tensors #4253
Conversation
@sahas3, @vivekkhandelwal1, can you please review? The failing checks are unrelated to this change; similar failures are seen on other recent PRs.
Hi @vivekkhandelwal1, can you please provide your feedback?
@zjgarvey, I am not sure who the code owner is who could review this change. Can you please provide feedback on this if possible?
Thanks! A few things to address before approval (most importantly the potentially wrong check on shape index).
lib/Dialect/Torch/Utils/Utils.cpp (outdated)

    bool unknownSize = false;
    for (auto [idx, shape] : llvm::enumerate(shapes)) {
      if (ranks[idx] - i - 1 < shape.size() &&
          shape[ranks[idx] - i - 1] == kUnknownSize) {
This is a bit confusing. 0 <= i < maxRank, so it seems redundant to check ranks[idx] - i - 1 < ranks[idx]. Perhaps the check should be ranks[idx] - i - 1 >= 0?
@zjgarvey, this check ensures that out-of-bounds access to shape is not performed when the input tensors have different ranks. The e2e test BroadcastTensorsModuleList_multiple_ranks covers this case. Let me know if you have a suggestion for making this check easier to read.
Maybe "confusing" isn't the correct word. I'm rather certain the check is wrong. E.g. for ranks = {1, 2} and maxRank = 2: when idx = 0 and i = 1, ranks[idx] - i - 1 = 1 - 1 - 1 = -1, which you are using to access shape for the tensor at idx = 0. Whereas ranks[idx] - i - 1 < ranks[idx] is automatic from the fact that i >= 0.
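To make the index arithmetic concrete, here is a minimal, hypothetical Python sketch of the right-to-left broadcast-shape walk (the names KUNKNOWN and broadcast_shape are illustrative, not the torch-mlir code). It uses the ranks[idx] - i - 1 >= 0 guard the review suggests, so a lower-rank tensor simply contributes nothing to leading dimensions it does not have:

```python
KUNKNOWN = -1  # stand-in for torch-mlir's kUnknownSize

def broadcast_shape(shapes):
    """Compute a broadcast shape by walking dimensions right-to-left."""
    ranks = [len(s) for s in shapes]
    max_rank = max(ranks)
    result = []
    for i in range(max_rank):  # i counts from the trailing dimension
        dim = 1
        unknown = False
        for idx, shape in enumerate(shapes):
            j = ranks[idx] - i - 1  # right-aligned index into shape
            if j < 0:
                # The guard: this tensor has no dimension at this
                # position, so it does not constrain the result.
                continue
            if shape[j] == KUNKNOWN:
                unknown = True
            else:
                dim = max(dim, shape[j])
        result.append(KUNKNOWN if unknown else dim)
    result.reverse()
    return result
```

With ranks {1, 2} as in the example above, the guard skips the rank-1 tensor when i = 1 instead of indexing it at -1.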
Added support for the torch.broadcast_tensors op by decomposing it into a sequence of aten.broadcast_to ops.
closes #4240