I am trying to extract the feature vector from your trained network. Firstly, I am not able to get a single feature vector for a single image, and I do not understand the exact reason behind it.
```python
import torch
from torch.autograd import Variable
from backbone import Network_D
from PIL import Image
from torchvision.transforms import ToTensor
import numpy
import time

model = Network_D()
model.load_state_dict(torch.load('/home/shubam/Documents/person_reidentification/SphereReID/res/model_final.pkl'))

img_path = '/home/shubam/Documents/person_reidentification/SphereReID/dataset/Market-1501-v15.09.15/bounding_box_test/0000_c3s2_105228_06.jpg'
image = Image.open(img_path)
image = ToTensor()(image).unsqueeze(0)  # unsqueeze to add artificial first (batch) dimension
image = Variable(image)
#print("the size of image is {}".format(image.size()))

t1 = time.time()
feature = model.forward(image).detach().cpu().numpy()
print("the feature is {}".format(feature[0, :]))
t2 = time.time()
print("time for cpu is {}".format(t2 - t1))

feature = model.forward(image).detach().cuda()#.numpy()
print("the feature is {}".format(feature[0, :]))
t3 = time.time()
print("time for gpu is {}".format(t3 - t2))
```
I get this output:
```
Traceback (most recent call last):
  File "/home/shubam/Documents/person_reidentification/SphereReID/model_converter.py", line 18, in <module>
    feature = model.forward(image).detach().cpu().numpy()
  File "/home/shubam/Documents/person_reidentification/SphereReID/backbone.py", line 54, in forward
    x = self.bn2(x)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/functional.py", line 1619, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 2048])
```
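The error comes from `bn2`, and the message says it happens "when training", so I suspect the BatchNorm layers are still in training mode and therefore need more than one sample per batch. Below is a minimal sketch of the single-image case under that assumption (calling `model.eval()` before the forward pass; I have not confirmed that this is the intended way to run inference with this model, and presumably the same resizing/normalization used during training should also be applied):

```python
import torch
from backbone import Network_D
from PIL import Image
from torchvision.transforms import ToTensor

model = Network_D()
model.load_state_dict(torch.load(
    '/home/shubam/Documents/person_reidentification/SphereReID/res/model_final.pkl',
    map_location='cpu'))  # load on CPU in case the weights were saved on GPU
model.eval()  # put BatchNorm (and any dropout) into inference mode

img_path = '/home/shubam/Documents/person_reidentification/SphereReID/dataset/Market-1501-v15.09.15/bounding_box_test/0000_c3s2_105228_06.jpg'
image = ToTensor()(Image.open(img_path)).unsqueeze(0)  # shape [1, 3, H, W]

with torch.no_grad():               # no gradients needed for feature extraction
    feature = model(image)          # expected shape [1, 2048]
feature = feature.cpu().numpy()[0]  # a single 2048-d feature vector
print(feature.shape)
```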
Secondly, I tried to concatenate a tensor of zeros with the same shape as the image to increase the batch dimension, so that the output has shape torch.Size([2, 2048]):
```python
image = Variable(image)
image = torch.cat((image, torch.zeros_like(image)), 0)  # batch of 2: real image + zeros
#print("the size of image is {}".format(image.size()))

t1 = time.time()
feature = model.forward(image).detach().cpu().numpy()
print("the feature is {}".format(feature))
t2 = time.time()
print("time for cpu is {}".format(t2 - t1))

feature = model.forward(image).detach().cuda()#.numpy()
print("the feature is {}".format(feature))
t3 = time.time()
print("time for gpu is {}".format(t3 - t2))
```
The output is:
```
the feature is [[ 0.4966124   0.57359195  0.683048   ... -0.6020484  -0.46173286
   0.577405  ]
 [-0.4953278  -0.60051304 -0.6328474  ...  0.6000462   0.48753467
  -0.6212439 ]]
time for cpu is 0.0762948989868
the feature is tensor([[-0.4958,  0.5337, -0.6106,  ..., -0.5830,  0.4491,  0.5779],
        [ 0.4971, -0.5606,  0.6608,  ...,  0.5810, -0.4233, -0.6217]],
       device='cuda:0')
time for gpu is 0.0752019882202
```
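My current understanding (which I have not verified) is that the network is still in training mode here, so the BatchNorm layers normalize each batch with its own statistics: the all-zero image is normalized against the real image and therefore comes out non-zero, and two forward passes need not return the same values (any dropout in Network_D would also be active). Note also that the `.cuda()` call above only moves the output tensor to the GPU; the forward pass itself still runs on the CPU. A small sketch of the check I would run after switching to eval mode, again assuming `model.eval()` is the intended way to use the trained network:

```python
import torch

model.eval()  # use the stored running statistics instead of per-batch statistics

with torch.no_grad():
    single  = model(image[:1])                    # the real image on its own
    batched = model(image)                        # real image plus the zero padding
    zeros   = model(torch.zeros_like(image[:1]))  # an all-zero input

# In eval mode the real image should give the same vector whether it is alone
# or padded with zeros, and the all-zero input should map to one fixed
# (typically still non-zero) vector, because the BatchNorm shifts and running
# means do not vanish for zero input.
print(torch.allclose(single[0], batched[0], atol=1e-5))
print(zeros[0, :5])
```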
Here you can see that the two runs give different feature vectors for the same image depending on how the computation is done, and that the all-zero image also gets non-zero values as its feature vector. Could you please suggest a way to obtain the feature vector for a given image? Thank you.