Replies: 2 comments 2 replies
- @ppwwyyxx Any pointers?
- I found a solution. By inspecting the DefaultPredictor source code, I found that we need to do an additional preprocessing step on the image before passing it to the model. You can use a code snippet like the one below to preprocess the input image.
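A minimal sketch of that preprocessing, modeled on DefaultPredictor.__call__ (here cfg and model are assumed to be the config and model you already built from the same checkpoint, and "input.jpg" is just a placeholder path):

```python
import cv2
import torch
import detectron2.data.transforms as T

# Load an image the way DefaultPredictor expects it: BGR, HWC, uint8.
original_image = cv2.imread("input.jpg")  # placeholder path

# Resize the shorter edge exactly as DefaultPredictor does, using the test-time sizes from cfg.
aug = T.ResizeShortestEdge(
    [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
)

with torch.no_grad():
    if cfg.INPUT.FORMAT == "RGB":
        # cv2 loads BGR; flip the channels if the model was trained on RGB input.
        original_image = original_image[:, :, ::-1]
    height, width = original_image.shape[:2]
    image = aug.get_transform(original_image).apply_image(original_image)
    image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))  # HWC -> CHW float tensor

    # The model takes a list of dicts; height/width are used to rescale the
    # outputs back to the original image size.
    inputs = {"image": image, "height": height, "width": width}
    predictions = model([inputs])[0]
```

Note that pixel normalization (cfg.MODEL.PIXEL_MEAN / cfg.MODEL.PIXEL_STD) and batching into an ImageList happen inside the model's forward (GeneralizedRCNN.preprocess_image), so if you call model.backbone, model.proposal_generator, or model.roi_heads directly, you need to run that step yourself as well.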
Hi @ppwwyyxx,
====Some background:====
I am able to run a pre-trained model on any input image to get mask predictions as well as the bounding boxes. I have set my model in eval mode, as required.
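(For reference, my model is built along these lines; the exact config name below is just an example of a pre-trained Mask R-CNN from the model zoo.)

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

# Build a pre-trained Mask R-CNN from the model zoo; the config name is only an example.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()  # inference mode, as required
```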
When I run the model, I also display its output, as shown below:
```python
output = model([inputs])
print('output instances are:', output[0]['instances'])
```
This gives me 7 class predictions for different objects in my image, i.e., num_instances = 7.
So far so good.
====My question:====
I am also getting the mask features from the same pre-trained model used above, using the following code:
```python
features = model.backbone(images.tensor)
proposals, _ = model.proposal_generator(images, features)
instances, _ = model.roi_heads(images, features, proposals)
```
(For brevity, I have omitted the two additional lines that extract the mask features.)
The num_instances here is 5, which is different from what I get when running the model end to end without splitting it apart, as described earlier (it was 7 there).
I am curious to know why this inconsistency arises.
Can you clarify? Thanks.