Replies: 2 comments 2 replies
-
The difference is preprocessing. This is a reference to the DefaultPredictor, which shows the preprocessing step applied before calling model(inputs).
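For anyone hitting the same mismatch, here is a minimal sketch of reproducing that preprocessing yourself before calling model(inputs), based on what DefaultPredictor does internally (resize via ResizeShortestEdge, channel handling per cfg.INPUT.FORMAT, then wrapping the tensor in a dict). The cv2 read, the file path, and the already-populated cfg are assumptions for the example:

```python
import cv2
import torch
import detectron2.data.transforms as T
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
# ... assumed: merge your model-zoo config and set cfg.MODEL.WEIGHTS here ...
model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

original_image = cv2.imread("input.jpg")  # placeholder path; BGR, HWC, uint8

# Same resize DefaultPredictor applies: shortest edge to MIN_SIZE_TEST, capped at MAX_SIZE_TEST
aug = T.ResizeShortestEdge(
    [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
)

with torch.no_grad():
    if cfg.INPUT.FORMAT == "RGB":
        original_image = original_image[:, :, ::-1]  # flip BGR -> RGB if the model expects RGB
    height, width = original_image.shape[:2]
    image = aug.get_transform(original_image).apply_image(original_image)
    image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))  # HWC -> CHW tensor
    inputs = {"image": image, "height": height, "width": width}
    outputs = model([inputs])[0]  # should match DefaultPredictor(cfg)(original_image)
```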
-
I am actually using pretrained models from the model zoo. If they were trained with 800 as the minimum size but my images are 500px, do you think it is better to upscale them, or is it fine to just feed in the 500px images as they are?
On Thu, May 27, 2021, 5:34 PM Leon Truong wrote:
> You can change the default input sizes using cfg.INPUT.MIN_SIZE_TRAIN, which defaults to 800 as the smallest size. The aim of resizing in this manner is to maintain the aspect ratio. In your case, training on smaller image sizes wouldn't lose anything, as the images are smaller than the default resize.
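As a rough sketch of that config override (the specific model-zoo config and the 500px values below are assumptions for illustration, not a recommendation):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# Assumed model choice; any model-zoo config is overridden the same way
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")

# Shortest-edge size sampled at train time; defaults to 800.
# With 500px source images, 800 would upscale them, while 500 keeps them at native size.
cfg.INPUT.MIN_SIZE_TRAIN = (500,)
cfg.INPUT.MIN_SIZE_TEST = 500  # optionally match the scale at inference time
```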
-
I am getting two different results on the same image when using DefaultPredictor vs. calling model(inputs) directly. Here are the two different outputs:
Code snippets:
Using model:
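The original snippets and outputs are not included here. Purely for illustration (not the poster's actual code), a bare call along these lines skips DefaultPredictor's resize and format handling, which is the preprocessing difference described in the answer above:

```python
import cv2
import torch

# Assumed setup: `model` built with build_model(cfg), weights loaded, model.eval() called
img = cv2.imread("input.jpg")  # placeholder path; BGR, HWC, uint8
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))  # HWC -> CHW, no resize

with torch.no_grad():
    # No ResizeShortestEdge and no cfg.INPUT.FORMAT handling here,
    # so the outputs can differ from DefaultPredictor on the same image.
    outputs = model([{"image": image, "height": img.shape[0], "width": img.shape[1]}])[0]
```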