Fix failing Owlv2ModelIntegrationTest & OwlViTModelIntegrationTest
#43182
Conversation
run-slow: owlv2, owlvit
This comment contains models: ["models/owlv2", "models/owlvit"]
CI Results: ✅ No failing test specific to this PR 🎉!
vasqu left a comment:
LGTM, but base OwlViT might need bigger tols again? I'm seeing it fail with slightly higher diffs there.
Force-pushed from fdc869c to 10733e6.
Yes, updated the PR. Thanks for the review!
run-slow: owlvit
This comment contains models: ["models/owlvit"]
CI Results: ✅ No failing test specific to this PR 🎉!
```diff
     [[0.0680, 0.0422, 0.1347], [0.2071, 0.0450, 0.4146], [0.2000, 0.0418, 0.3476]]
 ).to(torch_device)
-torch.testing.assert_close(outputs.pred_boxes[0, :3, :3], expected_slice_boxes, rtol=1e-4, atol=1e-4)
+torch.testing.assert_close(outputs.pred_boxes[0, :3, :3], expected_slice_boxes, rtol=1e-2, atol=1e-2)
```
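For context on what the bumped tolerances permit (a rough illustration, not part of the test): torch.testing.assert_close considers two values close when |actual - expected| <= atol + rtol * |expected|, so with atol = rtol = 1e-2 the largest coordinate in this slice may drift by roughly 0.014 before the check fails.

```python
# Rough illustration of the closeness check torch.testing.assert_close applies
# (|actual - expected| <= atol + rtol * |expected|); not part of the test itself.
atol, rtol = 1e-2, 1e-2
expected = 0.4146  # largest coordinate in the expected slice above
allowed = atol + rtol * abs(expected)
print(f"allowed deviation for {expected}: {allowed:.4f}")  # ~0.0141
```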
The tolerance is a bit too high. Do we really
- need to change that many?
- have this high a tol, i.e. would 1e-3 maybe be enough instead?
> have this high a tol, i.e. would 1e-3 maybe be enough instead?

Yes, I thought of setting it to 1e-3, but the tests are failing with differences of roughly 0.002 to 0.004, so I went with 1e-2.

> need to change that many?

All tests pass for me locally, so I can't reproduce the failures (no A10 GPU 😅). I've made the change for all asserts among the failing tests, but ideally I think some of them could be kept at 1e-3.
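One way to settle this without guessing would be to log the observed error on the failing machine and pick the smallest tolerance that covers it; a hypothetical helper (not part of the test suite) might look like:

```python
import torch

def report_diff(actual: torch.Tensor, expected: torch.Tensor) -> None:
    """Hypothetical helper: print the max absolute/relative error between an output
    slice and its expected values so a minimal rtol/atol can be chosen."""
    abs_diff = (actual - expected).abs()
    rel_diff = abs_diff / expected.abs().clamp_min(1e-12)
    print(f"max abs diff: {abs_diff.max().item():.6f}")
    print(f"max rel diff: {rel_diff.max().item():.6f}")

# e.g. report_diff(outputs.pred_boxes[0, :3, :3].cpu(), expected_slice_boxes.cpu())
```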
Hmm, I will update the values in this PR myself then, probably next week though.
Running on an A10; I'd rather keep correct values than bump the tolerances arbitrarily high.
Sure, thank you!
OK, updated the values just now.
cc @ydshieh if this is OK; the values only differ slightly, seems to be a GPU diff.
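For reference, the expected slices can be regenerated on the target GPU with a short script. The sketch below mirrors the public OwlViT example rather than the exact integration test, so the checkpoint, text queries, and image URL are assumptions:

```python
# Minimal sketch (assumed checkpoint/queries/image, not the exact integration test)
# for regenerating the expected pred_boxes slice on a given GPU.
import requests
import torch
from PIL import Image
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32").to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt"
).to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

# Print the slice that the test compares against, so its expected values can be refreshed.
print(outputs.pred_boxes[0, :3, :3])
```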
[For maintainers] Suggested jobs to run (before merge): run-slow: owlv2, owlvit
run-slow: owlv2, owlvit
This comment contains models: ["models/owlv2", "models/owlvit"]
CI Results: ✅ No failing test specific to this PR 🎉!
What does this PR do?
Fixes failing Owlv2ModelIntegrationTest & OwlViTModelIntegrationTest
Who can review?
@vasqu