I'm working on using detectron2 with a custom dataset, where my goal is at least to get an RPN that works effectively and closely models my dataset. My dataset contains only grayscale images, so my custom loader converts them to RGB, and the model's input is configured specifically for RGB. Following the documentation, I built a custom trainer that takes my dataset, but I'm still getting non-useful output despite training for many iterations.
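For context, the grayscale-to-RGB step in my loader is roughly like this (a sketch; the function name is just illustrative):

```python
import numpy as np

def gray_to_rgb(image: np.ndarray) -> np.ndarray:
    """Replicate a single-channel grayscale image across 3 channels
    so it matches the model's expected (H, W, 3) input layout."""
    if image.ndim == 2:
        image = image[:, :, None]            # (H, W) -> (H, W, 1)
    if image.shape[2] == 1:
        image = np.repeat(image, 3, axis=2)  # (H, W, 1) -> (H, W, 3)
    return image
```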
I've tried altering my configuration settings, adjusting both the base config and the pre-trained weights. Despite setting the number of classes for the different base networks, I still get output that doesn't match the three classes in my dataset and instead seems to correspond to classes that aren't defined in it. Tweaking other settings does influence how the model learns, but the quality of the output remains poor. My goal is a model that can at least reproduce the input ROIs; if it can also predict classes, that would be a bonus.
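For reference, this is the kind of config adjustment I've been making (assuming a standard Faster R-CNN base config from the model zoo; the exact keys may differ for other meta-architectures):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# Start from a COCO-pretrained Faster R-CNN base config.
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
# My dataset has three classes, so the ROI head needs to be resized.
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
# detectron2's default input format; my loader converts grayscale to match.
cfg.INPUT.FORMAT = "BGR"
```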
Should my instance proposals match the input dataset more closely? What is the best way to evaluate how well my model is learning when the output doesn't seem to track my current loss function? Is there a particular approach needed for working with grayscale images? Should I try setting up the models in plain PyTorch, without detectron2, and compare the output?
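To make "does the model replicate the input ROIs" concrete, I've been thinking of measuring proposal recall against the ground-truth boxes, something like the sketch below (boxes assumed to be in `(x1, y1, x2, y2)` format; the threshold is arbitrary):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def proposal_recall(gt_boxes, proposals, thresh=0.5):
    """Fraction of ground-truth ROIs covered by at least one proposal."""
    if not gt_boxes:
        return 0.0
    hits = sum(1 for gt in gt_boxes
               if any(iou(gt, p) >= thresh for p in proposals))
    return hits / len(gt_boxes)
```

If recall is near zero even on training images, that would point at a data/config problem rather than just underfitting.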
Any help, suggestions, or pointers in a direction would be much appreciated.