Description
Thanks a lot for your help. As you know, I am working in the image reconstruction field.

First step: I extracted image features with MatConvNet, but they did not work well for reconstruction. I then extracted Caffe image features, decoded brain features with FastL2LiR, and finally ran the reconstruction; my results were very close to yours.

Second step: I upgraded my Ubuntu to version 22 and could not install Caffe on it, so I used PyTorch for image feature extraction. I then used the PyTorch image features in FastL2LiR to decode brain features. Recently you uploaded new code that does reconstruction with PyTorch. When I ran this code with my PyTorch features, I got large errors; when I ran it with your data, I also got large errors. I reported this problem to you, you modified your code, and the errors decreased with your features.

However, when I run this code with my features, which are decoded in the same way (FastL2LiR), I still get large errors. The only difference is that I extract the image features with PyTorch. I have attached my code; please look at it.

I need to use PyTorch because I want to use ResNet, and I do not have a strong enough system in Iran to train this network in Caffe and obtain resnet.caffemodel. Please help me.
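Before suspecting the reconstruction code itself, it may help to check numerically how close my PyTorch features are to the Caffe features for the same images. This is not part of my pipeline, just a minimal sketch (the helper name and the synthetic arrays are mine, standing in for two saved feature files of one image); it compares two feature arrays with a Pearson correlation, which should be near 1 if the two extractors agree:

```python
import numpy as np

def feature_similarity(feat_a, feat_b):
    # Pearson correlation between two flattened feature arrays;
    # values near 1.0 mean the two extractors essentially agree
    a = np.asarray(feat_a, dtype=np.float64).ravel()
    b = np.asarray(feat_b, dtype=np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic arrays standing in for Caffe/PyTorch conv5_1 features of one image
rng = np.random.default_rng(0)
caffe_feat = rng.standard_normal((1, 512, 14, 14))
torch_feat = caffe_feat + 0.1 * rng.standard_normal((1, 512, 14, 14))
print(feature_similarity(caffe_feat, torch_feat))
```

If this correlation is low for real feature pairs, the problem is in feature extraction (model weights or preprocessing), not in FastL2LiR or the reconstruction step.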
For example, in this code I extract conv5_1 image features with PyTorch:
import os
import imghdr

import numpy as np
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

from bdpy.dataform import save_array

# Load the pre-trained VGG19 model and set it to evaluation mode
vgg19 = models.vgg19(pretrained=True)
vgg19.eval()

# Sub-network that outputs conv5_1 activations:
# in torchvision's VGG19, vgg19.features[28] is conv5_1, so keep layers 0-28
conv5_1_features = nn.Sequential(*list(vgg19.features.children())[:29])
conv5_1_features.eval()

data_dir = '/content/drive/MyDrive/image_feature_python/resultes/pytorch_image_feat_training/pytorch/VGG19/conv5_1/'
img_dir = '/content/drive/MyDrive/matconvnet/data/training'

# Collect image files (extend the list so files from all sub-directories
# are accumulated instead of overwritten on each os.walk iteration)
imagefiles = []
for root, dirs, files in os.walk(img_dir):
    imagefiles.extend(os.path.join(root, f)
                      for f in files
                      if imghdr.what(os.path.join(root, f)))
print('Image num: %d' % len(imagefiles))

# Standard torchvision ImageNet preprocessing (built once, outside the loop)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

for n, path in enumerate(imagefiles):
    print(n)
    pil_image = Image.open(path).convert('RGB')
    input_tensor = preprocess(pil_image).unsqueeze(0)

    # Forward pass up to conv5_1
    with torch.no_grad():
        features = conv5_1_features(input_tensor)

    feat = features.numpy().reshape(1, 512, 14, 14)
    print(feat.shape)

    # Image name without the extension (e.g. strips '.JPEG')
    name = os.path.splitext(os.path.basename(path))[0]

    # Save as a .mat file for FastL2LiR
    savefile = os.path.join(data_dir, '%s.mat' % name)
    save_array(savefile, feat, key='feat', dtype=np.float32, sparse=False)
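One thing I am considering: the original Caffe VGG models use a different preprocessing than torchvision (BGR channel order, 0-255 pixel range, per-channel mean subtraction, no division by the standard deviation). If the reference Caffe features were extracted that way, the two pipelines will produce different activations even with equivalent weights. A minimal sketch of Caffe-style preprocessing in PyTorch (the mean values are the standard ImageNet BGR means used by the Caffe VGG models; this only applies when the loaded weights were converted from Caffe, since a torchvision-pretrained VGG19 still expects torchvision's normalization):

```python
import numpy as np
from PIL import Image
import torch

# ImageNet channel means in BGR order, as used by the original Caffe VGG models
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def caffe_style_preprocess(pil_image, size=224):
    # RGB PIL image -> (1, 3, size, size) tensor in BGR, 0-255, mean-subtracted
    img = pil_image.convert('RGB').resize((size, size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32)   # HWC, RGB, 0-255
    arr = arr[:, :, ::-1] - VGG_MEAN_BGR      # flip to BGR, subtract means
    arr = arr.transpose(2, 0, 1)              # HWC -> CHW
    return torch.from_numpy(arr.copy()).unsqueeze(0)
```

Comparing features extracted with this preprocessing against the torchvision-normalized ones should show whether the preprocessing mismatch alone explains the difference.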