Inverting sensitivity map #4936
Hi, I need some help with applying the inverse transformation to the output of a sensitivity-map / Grad-CAM map. The code and the prediction maps work fine, but for visualization I need to resize the prediction map back to the original image size. My input images have various sizes, so in my transforms I simply resize them to 64x64x64; now I need to invert that resize. Based on my logic the code below should work.

**Computing the sensitivity map and transforming it back (inverse transform) to the original input size:**
```python
import monai
import torch
from monai.transforms import Compose, EnsureChannelFirst, Resize, ScaleIntensityRange
from monai.utils import convert_to_tensor

# (model, test_loader, device, image_size, mask_size, n_batch, stride, total and
# correct are defined elsewhere in the script)

# Define transforms
# test_transforms = Compose([EnsureChannelFirstd(keys=['image']),
#                            Resized(keys=['image'], spatial_size=(64, 64, 64)),
#                            ScaleIntensityRanged(keys=['image'],
#                                                 a_min=0, a_max=255,
#                                                 b_min=0.0, b_max=1.0)])
test_transforms = Compose([EnsureChannelFirst(),
                           Resize(spatial_size=image_size),
                           ScaleIntensityRange(a_min=0, a_max=255, b_min=0.0, b_max=1.0)])

# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data_batch in test_loader:
        val_images, val_labels = data_batch[0].to(device), data_batch[1].to(device)
        # calculate outputs by running images through the network
        outputs = model(val_images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += val_labels.size(0)
        correct += (predicted == val_labels).sum().item()

        # Get the occlusion sensitivity map
        occ_sens = monai.visualize.OcclusionSensitivity(nn_module=model, mask_size=mask_size,
                                                        n_batch=n_batch, stride=stride)
        occ_sens_b_box_all = [-1, -1, -1, -1, -1, -1, -1, -1]
        occ_result_all, _ = occ_sens(x=val_images, b_box=occ_sens_b_box_all)
        occ_result_all = occ_result_all[..., val_labels.argmax().item()]

        # occlusion output is not a meta tensor, so convert to MetaTensor
        occ_result_all = convert_to_tensor(occ_result_all, track_meta=True)
        # copy the applied operations and metadata from the image input
        occ_result_all.applied_operations = data_batch[0].to(device).applied_operations
        occ_result_all.meta = data_batch[0].to(device).meta

        seg_dict = {"image": occ_result_all}
        # with allow_missing_keys_mode(test_transforms):
        inverted_seg = test_transforms.inverse(seg_dict)
        print(inverted_seg.shape)
```

Error:

```
Traceback (most recent call last):
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\transform.py", line 90, in apply_transform
    return _apply_transform(transform, data, unpack_items)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\transform.py", line 54, in _apply_transform
    return transform(parameters)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\spatial\dictionary.py", line 600, in inverse
    d[key] = self.resizer.inverse(d[key])
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\spatial\array.py", line 890, in inverse
    transform = self.pop_transform(data)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\inverse.py", line 206, in pop_transform
    return self.get_most_recent_transform(data, key, check, pop=True)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\inverse.py", line 188, in get_most_recent_transform
    self.check_transforms_match(all_transforms[-1])
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\inverse.py", line 145, in check_transforms_match
    xform_id = transform.get(TraceKeys.ID, "")
AttributeError: 'list' object has no attribute 'get'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "patch_classification_inference.py", line 282, in <module>
    calc_accuracy(model, test_loader, test_transforms)
  File "patch_classification_inference.py", line 251, in calc_accuracy
    inverted_seg = test_transforms.inverse(seg_dict)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\compose.py", line 183, in inverse
    data = apply_transform(t.inverse, data, self.map_items, self.unpack_items, self.log_stats)
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\monai\transforms\transform.py", line 114, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <bound method Resized.inverse of <monai.transforms.spatial.dictionary.Resized object at 0x00000197B34D17C0>>
```
Replies: 1 comment · 12 replies
The overall logic looks great. One detail is that `data_batch[0]` and `occ_result_all` are 'batches', you can try to decollate before the inverse:

```python
for occ_result in monai.data.decollate_batch(occ_result_all):
    # for occ_result in occ_result_all:  # this also implicitly decollates
    ...
    inverted = test_transforms.inverse(occ_result)
    ...
```

(I'm converting this to a discussion)
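A minimal end-to-end sketch of that suggestion, assuming the array-based `test_transforms`, `occ_sens`, `test_loader`, and `device` defined in the post above; the per-item metadata copy mirrors the original attempt, just applied after `decollate_batch`, and the class-channel indexing is kept as in the question:

```python
import torch
from monai.data import decollate_batch
from monai.utils import convert_to_tensor

with torch.no_grad():
    for data_batch in test_loader:
        val_images, val_labels = data_batch[0].to(device), data_batch[1].to(device)

        # occlusion sensitivity on the whole batch, then pick the class channel as in the question
        occ_result_all, _ = occ_sens(x=val_images, b_box=[-1, -1, -1, -1, -1, -1, -1, -1])
        occ_result_all = occ_result_all[..., val_labels.argmax().item()]

        # decollate both the inputs and the maps so that each item carries its own
        # applied_operations / meta rather than the batched lists
        for img, occ_result in zip(decollate_batch(val_images), decollate_batch(occ_result_all)):
            occ_result = convert_to_tensor(occ_result, track_meta=True)
            occ_result.applied_operations = img.applied_operations
            occ_result.meta = img.meta
            # array-based Compose: pass the tensor directly, not a {"image": ...} dict
            inverted = test_transforms.inverse(occ_result)
            print(inverted.shape)  # should match the original (pre-Resize) spatial size
```

Each decollated item then has a single list of applied operations, so `Resize.inverse` can pop its own record instead of hitting the `'list' object has no attribute 'get'` error seen in the traceback.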