[TORCH] Add support for aten.hinge_embedding_loss Op #4227
@@ -2455,6 +2455,95 @@ def BinaryCrossEntropyWithLogitsStaticModule_basic(module, tu: TestUtils):

```python
# ==============================================================================


class HingeEmbeddingLossBasicModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args(
        [
            None,
            ([-1, -1, -1], torch.float32, True),
            ([-1, -1, -1], torch.float32, True),
        ]
    )
    def forward(self, input, target):
        return torch.ops.aten.hinge_embedding_loss(
            input, target, margin=1.5, reduction=1
        )


@register_test_case(module_factory=lambda: HingeEmbeddingLossBasicModule())
def HingeEmbeddingLossBasicModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(1, 2, 3), tu.rand(1, 2, 3))


class HingeEmbeddingLossReductionMeanModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args(
        [
            None,
            ([-1, -1], torch.float32, True),
            ([-1, -1], torch.float32, True),
        ]
    )
    def forward(self, input, target):
        return torch.ops.aten.hinge_embedding_loss(input, target, reduction=1)


@register_test_case(module_factory=lambda: HingeEmbeddingLossReductionMeanModule())
def HingeEmbeddingLossReductionMeanModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(8, 1), tu.rand(1, 1))


class HingeEmbeddingLossReductionSumModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args(
        [
            None,
            ([-1, -1], torch.float32, True),
            ([-1, -1], torch.float32, True),
        ]
    )
    def forward(self, input, target):
        return torch.ops.aten.hinge_embedding_loss(input, target, reduction=2)


@register_test_case(module_factory=lambda: HingeEmbeddingLossReductionSumModule())
def HingeEmbeddingLossReductionSumModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(2, 5), tu.rand(1, 1))


class HingeEmbeddingLossReductionNoneModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args(
        [
            None,
            ([-1, -1], torch.float32, True),
            ([-1], torch.float32, True),
        ]
    )
    def forward(self, input, target):
        return torch.ops.aten.hinge_embedding_loss(input, target, margin=1.0)


@register_test_case(module_factory=lambda: HingeEmbeddingLossReductionNoneModule())
def HingeEmbeddingLossReductionNoneModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(8, 5), tu.rand(1))
```
Review comment (vivekkhandelwal1): All 3 of these tests have only a single value as a target; as a result, not all the paths of the lowering will get tested. Ideally, for testing purposes, the target tensor should contain a mix of -1 and 1 values.

Author reply: Thanks for the review @vivekkhandelwal1. For this question, I've addressed that in my earlier comment here: #4227 (comment).
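A hedged sketch (not part of the PR; the helper name is hypothetical) of how a target mixing -1 and 1 could be generated for such a test, so that both the `target == 1` and `target == -1` branches of the lowering are exercised in a single run:

```python
import torch

def make_mixed_target(*shape):
    # randint yields 0/1; map {0, 1} -> {-1.0, 1.0} as a float32 tensor.
    return (torch.randint(0, 2, shape) * 2 - 1).float()

# e.g. module.forward(tu.rand(8, 5), make_mixed_target(8, 5))
```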
```python
# ==============================================================================


class TraceModule(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
```
Review comment (vivekkhandelwal1): Instead of doing all this, you can just do:
Author reply: @vivekkhandelwal1 Thanks for the suggestion. I did try the simplified version initially, but it caused numerical validation errors in some test cases. This happens because the target tensor can sometimes contain values other than just -1 and 1.
To handle this properly and stay consistent with PyTorch's semantics, I decided to explicitly check for both target == 1 and target == -1. That way the behavior stays correct even if the target contains values other than -1 and 1. For example, see the sketch below.
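A minimal, hedged illustration of the kind of case being described (not necessarily the author's original example), assuming a target element of 0, which is neither 1 nor -1:

```python
# Per the where-based native formula referenced later in this thread, an
# element with target 0 satisfies both `target != 1` and `target != -1`,
# so it picks up the clamped margin term AND the raw input term:
#   max(0, 1.0 - 0.4) + 0.4 = 1.0   (margin = 1.0)
# A decomposition that only branches on `target == 1` would keep just one
# of the two contributions and diverge from this value.
import torch
import torch.nn.functional as F

inp = torch.tensor([0.4, -0.3])
tgt = torch.tensor([0.0, 1.0])  # 0.0 is outside the documented {-1, 1} set
print(F.hinge_embedding_loss(inp, tgt, margin=1.0, reduction="none"))
# expected under the where-based formula: tensor([ 1.0000, -0.3000])
```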
Review comment (vivekkhandelwal1): In what cases and how? Since the definition says that it can contain only -1 and 1.
Author reply: Thanks for the reply @vivekkhandelwal1. I took the reference from the PyTorch native implementation: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/Loss.cpp#L182. While the official definition of hinge_embedding_loss states that the target should contain only -1 and 1, the native implementation doesn't enforce this restriction and handles arbitrary values using at::where(target != 1, ...) and at::where(target != -1, ...). So, to stay consistent with PyTorch's behaviour, I followed the same logic.
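For reference, a rough Python rendering of that where-based semantics (a sketch only; the function name is illustrative, and this is not the torch-mlir decomposition itself):

```python
import torch

def hinge_embedding_loss_ref(input, target, margin=1.0, reduction="mean"):
    # Elements where target != 1 contribute max(0, margin - input).
    margin_term = torch.where(
        target != 1, (margin - input).clamp(min=0), torch.zeros_like(input)
    )
    # Elements where target != -1 contribute the input value itself.
    input_term = torch.where(target != -1, input, torch.zeros_like(input))
    loss = margin_term + input_term
    if reduction == "mean":  # reduction=1 in the aten op
        return loss.mean()
    if reduction == "sum":   # reduction=2
        return loss.sum()
    return loss              # reduction=0 ("none")
```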
Review comment (vivekkhandelwal1): Okay. In that case, it would be better to add this link and the justification in a comment, so that future users and contributors don't get confused.
Author reply: Thanks for the reply @vivekkhandelwal1. I have added the comments as per your suggestion.