-
I think you can follow the general approach described in https://discuss.pytorch.org/t/how-can-i-extract-intermediate-layer-output-from-loaded-cnn-model/77301. For GlobalNet:

```python
import torch

from monai.networks.blocks import Warp
from monai.networks.nets import GlobalNet

activation = {}

def get_activation(name):
    # Forward hook that stores the hooked layer's output under `name`.
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

input_param = {
    "image_size": (16, 16),
    "spatial_dims": 2,
    "in_channels": 1,
    "num_channel_initial": 16,
    "depth": 1,
    "out_kernel_initializer": "kaiming_uniform",
    "out_activation": None,
    "pooling": True,
    "concat_skip": True,
    "encode_kernel_sizes": 3,
}
net = GlobalNet(**input_param)

# Hook the final fully connected layer of the affine output block so its
# output (the predicted affine parameters) is captured on the forward pass.
net.output_block.fc.register_forward_hook(get_activation("fc"))

warp_layer = Warp()
img = torch.randn((1, 1, 16, 16))
result = net(img)                 # dense displacement field (DDF)
warped = warp_layer(img, result)  # moving image resampled with the DDF
print(activation)
```
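As a follow-up note (this is my reading of MONAI's AffineHead, so treat the exact shapes as an assumption): for `spatial_dims=2` the hooked `fc` layer should emit 6 values per batch item, i.e. the flattened 2×3 affine matrix that GlobalNet uses internally to build the displacement field:

```python
# Assumption: for spatial_dims=2, AffineHead's fc layer outputs 6 values
# per batch item (a flattened 2x3 affine matrix); for spatial_dims=3 it
# would be 12 values (3x4).
theta = activation["fc"]                 # expected shape: (1, 6)
affine_matrix = theta.reshape(-1, 2, 3)  # one 2x3 matrix per batch item
print(affine_matrix)
```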
-
Now it's also addressed by #6459.
-
When performing registration, rather than warping the moving image to get a predicted image, is there a way to get the motion parameters of the affine transformation between the moving and fixed images through GlobalNet or any other network? Thank you in advance for the help.