Skip exp float16 test in XNNPACK #12097
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/12097
Note: Links to docs will display an error until the docs builds have been completed.
⏳ 28 Pending, 1 Unrelated Failure
As of commit 2195fd8 with merge base db8ef6b:
FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot cherry-pick --onto release/0.7 -c fixnewfeature
(cherry picked from commit 268806c)
Cherry picking #12097
The cherry pick PR is at #12119. It is recommended to link a fixnewfeature cherry pick PR with an issue. The following tracker issues are updated:
Details for Dev Infra team: raised by workflow job.
# TODO (leafs1): Fix flaky tests. Land fix asap
# and cherry-pick onto release/0.7 branch
@unittest.skip(reason="For float16, numerical discrepancies are too high")
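For context, the decorator above disables the half-precision exp test unconditionally. The following is a minimal sketch of how such a skip might sit on the test, assuming a test class and delegation helper of this shape; the class, method, and helper names are illustrative, not the actual executorch code:

```python
# Illustrative sketch only: TestExp / _test_exp are assumed names, not the
# real executorch XNNPACK test harness.
import unittest

import torch


class TestExp(unittest.TestCase):
    def _test_exp(self, inputs):
        # Lower torch.exp through the XNNPACK delegate and compare the
        # delegated output against eager mode (harness omitted in this sketch).
        ...

    # TODO (leafs1): Fix flaky tests. Land fix asap
    # and cherry-pick onto release/0.7 branch
    @unittest.skip(reason="For float16, numerical discrepancies are too high")
    def test_fp16_exp(self):
        inputs = (torch.randn(20).to(torch.float16),)
        self._test_exp(inputs)
```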
@mcr229 should we increase atol/rtol for this, or normalize/range-select inputs?
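For reference, a hedged sketch of the two alternatives raised in that comment, widening tolerances versus constraining the input range. The helper name and the tolerance values are assumptions, not executorch's actual comparison harness:

```python
# Illustrative only: check_exp_fp16 and the atol/rtol values are assumptions.
import torch


def check_exp_fp16(delegate_out: torch.Tensor, inputs: torch.Tensor) -> None:
    # Reference computed in fp32, then cast back to fp16 for comparison.
    ref = torch.exp(inputs.to(torch.float32)).to(torch.float16)
    # Option 1: widen atol/rtol for half precision instead of skipping the test.
    torch.testing.assert_close(delegate_out, ref, atol=1e-2, rtol=1e-2)


# Option 2: range-select inputs so exp() stays well inside the fp16 range
# (fp16 max is ~65504, so exp overflows for inputs above ~11.09).
inputs = torch.empty(20).uniform_(-4.0, 4.0).to(torch.float16)
```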
No description provided.