Hi, thank you for sharing the code and the trained models!
I have a question specific to the demo in your PyTorch implementation. As I understand it, you use a base_size = 512, to which any incoming image is resized regardless of its input dimensions, while the original aspect ratio is maintained. Inference is then run in a grid fashion over 473x473 crops.
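For reference, here is a minimal NumPy sketch of the pipeline I am describing, as I understand it. The helper names, the stride value, and the averaging of logits over overlapping crops are my assumptions for illustration, not the repo's actual code; it also assumes the resized image is at least crop_size in each dimension.

```python
import numpy as np

def scale_dims(h, w, base_size=512):
    # Resize so the longer side equals base_size, preserving aspect
    # ratio (assumption: the demo may use a different convention).
    scale = base_size / max(h, w)
    return round(h * scale), round(w * scale)

def grid_crops(h, w, crop_size=473, stride=337):
    # Top-left corners of overlapping crops covering an h x w image.
    ys = list(range(0, max(h - crop_size, 0) + 1, stride))
    xs = list(range(0, max(w - crop_size, 0) + 1, stride))
    # Make sure the last crop reaches the image border.
    if ys[-1] + crop_size < h:
        ys.append(h - crop_size)
    if xs[-1] + crop_size < w:
        xs.append(w - crop_size)
    return [(y, x) for y in ys for x in xs]

def sliding_inference(image, model, num_classes, crop_size=473, stride=337):
    # Run the model on each crop and average logits where crops overlap.
    # `model` maps an (H, W, C) crop to (num_classes, H, W) logits.
    h, w = image.shape[:2]
    logits = np.zeros((num_classes, h, w), dtype=np.float32)
    counts = np.zeros((1, h, w), dtype=np.float32)
    for y, x in grid_crops(h, w, crop_size, stride):
        crop = image[y:y + crop_size, x:x + crop_size]
        logits[:, y:y + crop_size, x:x + crop_size] += model(crop)
        counts[:, y:y + crop_size, x:x + crop_size] += 1
    return logits / counts
```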
My question is: why have a fixed crop_size or base_size at all? There is no fully connected layer in the architecture, so why is a fixed size needed? Is the choice of these numbers arbitrary, or is there a solid reason behind it?
Thank you for your time!
Best,
Touqeer