
Update base.py #333

Open

jzakirov wants to merge 2 commits into master from compute_grads_fix

Conversation

@jzakirov (Collaborator) commented Jan 27, 2023

Closes #332

Proposed Changes

  • Fixed gradient map computation
  • Updated all metrics to use proper kernel shapes (optional, function supports both variants now).
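For reference, a minimal sketch of the grouped-convolution approach behind the fix; the function name, the repeat/reshape steps, and the final aggregation over kernel outputs are illustrative assumptions, not necessarily the exact code in this PR:

import torch
import torch.nn.functional as F

def gradient_map_sketch(x: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    # x: (N, C, H, W); kernels: (k_N, k_H, k_W) or (k_N, 1, k_H, k_W)
    N, C, H, W = x.shape
    if kernels.dim() != 4:
        kernels = kernels.unsqueeze(1)               # -> (k_N, 1, k_H, k_W)
    padding = kernels.size(-1) // 2
    kernels = kernels.repeat(C, 1, 1, 1)             # one copy of the kernel stack per input channel
    # groups=C convolves each channel with its own copy of the kernel stack.
    grads = F.conv2d(x, kernels.to(x), groups=C, padding=padding)
    grads = grads.view(N, C, -1, H, W)               # split the outputs back per channel
    return torch.sqrt(torch.sum(grads ** 2, dim=2))  # gradient magnitude, shape (N, C, H, W)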

codecov bot commented Jan 27, 2023

Codecov Report

Merging #333 (cc75629) into master (d04a848) will decrease coverage by 0.07%.
The diff coverage is 71.42%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #333      +/-   ##
==========================================
- Coverage   92.01%   91.94%   -0.07%     
==========================================
  Files          34       34              
  Lines        2491     2496       +5     
==========================================
+ Hits         2292     2295       +3     
- Misses        199      201       +2     
Flag        Coverage Δ
unittests   91.94% <71.42%> (-0.07%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files           Coverage Δ
piq/functional/base.py   88.37% <71.42%> (-3.74%) ⬇️

@jzakirov requested review from denproc and snk4tr on February 7, 2023 15:20
sonarqubecloud bot commented Feb 7, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No Coverage information
0.0% Duplication

@denproc (Collaborator) left a comment


Hi @zakajd,

As you mentioned in #332, we pass a 4D tensor, not a 3D one. I would suggest updating the docstring instead of expanding the function to non-standard kernel sizes, which could require additional testing (edge cases: kernels.dim() == 5 or kernels.dim() == 2).

I checked every metric in the library that uses gradient_map. Usually, we expect only the luminance channel to be passed to the function. I assume there are other cases outside the library where it could be helpful to compute the gradient map per channel, i.e. using gradient_map as a standalone function.
If we follow this path, let's add asserts on the kernel size and tests that cover the new behaviour of the function.
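If the multi-channel path is kept, a test along these lines could cover the per-channel behaviour (a sketch only, assuming the fixed function returns one gradient map per input channel):

import torch
from piq.functional import gradient_map

def test_gradient_map_multichannel_matches_per_channel() -> None:
    x = torch.rand(2, 3, 32, 32)
    # Scharr kernel in the x direction plus its transpose for the y direction.
    scharr_x = torch.tensor([[[-3., 0., 3.], [-10., 0., 10.], [-3., 0., 3.]]]) / 16
    kernels = torch.cat([scharr_x, scharr_x.transpose(-1, -2)])  # (2, 3, 3)
    # Multi-channel call should agree with processing each channel on its own.
    full = gradient_map(x, kernels)
    per_channel = torch.cat([gradient_map(x[:, i:i + 1], kernels) for i in range(x.size(1))], dim=1)
    assert torch.allclose(full, per_channel, atol=1e-6)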

kernels = kernels.repeat(C, 1, 1, 1)

# Process each channel separately using group convolution.
grads = torch.nn.functional.conv2d(x, kernels.to(x), groups=C, padding=padding)
@denproc (Collaborator) commented Feb 7, 2023


Implicit transfer between devices is not needed, as the current implementation in master already handles the device and dtype properties.

Suggested change
- grads = torch.nn.functional.conv2d(x, kernels.to(x), groups=C, padding=padding)
+ grads = torch.nn.functional.conv2d(x, kernels, groups=C, padding=padding)

Args:
    x: Tensor with shape (N, C, H, W).
-   kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W)
+   kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W) or (k_N, 1, k_H, k_W)

Suggested change
- kernels: Stack of tensors for gradient computation with shape (k_N, k_H, k_W) or (k_N, 1, k_H, k_W)
+ kernels: Stack of tensors for gradient computation with shape (k_C_out, k_C_in, k_H, k_W). k_C_in equals 1.
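For illustration, a kernel stack in the suggested (k_C_out, k_C_in, k_H, k_W) layout with k_C_in = 1 could be built like this (the Scharr values are only an example):

import torch

scharr_x = torch.tensor([[-3., 0., 3.], [-10., 0., 10.], [-3., 0., 3.]]) / 16  # x-direction kernel
kernels = torch.stack([scharr_x, scharr_x.transpose(-1, -2)])  # (k_C_out=2, k_H=3, k_W=3)
kernels = kernels.unsqueeze(1)                                 # (2, 1, 3, 3): k_C_in equals 1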

Comment on lines 58 to +64
padding = kernels.size(-1) // 2
grads = torch.nn.functional.conv2d(x, kernels, padding=padding)
N, C, H, W = x.shape

# Expand kernel if this is not done already and repeat to match number of groups
if kernels.dim() != 4:
    kernels = kernels.unsqueeze(1)

Suggested change
- padding = kernels.size(-1) // 2
- grads = torch.nn.functional.conv2d(x, kernels, padding=padding)
- N, C, H, W = x.shape
- # Expand kernel if this is not done already and repeat to match number of groups
- if kernels.dim() != 4:
-     kernels = kernels.unsqueeze(1)
+ assert kernels.dim() == 4, f'Expected 4D kernel, got {kernels.dim()}D tensor'
+ assert kernels.size(1) == 1, f'Expected kernel with a single input channel, got kernel of size {kernels.size()}'
+ assert kernels.size(-1) == kernels.size(-2), f'Expected square kernel along the last two dimensions, got {kernels.size()}'
+ padding = kernels.size(-1) // 2
+ N, C, H, W = x.shape
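Putting the suggestion together, the body of gradient_map could read roughly as follows; the grouped convolution and the final aggregation are assumptions carried over from the PR, not a confirmed implementation:

import torch

def gradient_map(x: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    assert kernels.dim() == 4, f'Expected 4D kernel, got {kernels.dim()}D tensor'
    assert kernels.size(1) == 1, f'Expected kernel with a single input channel, got kernel of size {kernels.size()}'
    assert kernels.size(-1) == kernels.size(-2), f'Expected square kernel along the last two dimensions, got {kernels.size()}'
    padding = kernels.size(-1) // 2
    N, C, H, W = x.shape
    # Repeat the kernel stack per channel and convolve each channel independently.
    grads = torch.nn.functional.conv2d(x, kernels.repeat(C, 1, 1, 1), groups=C, padding=padding)
    grads = grads.view(N, C, -1, H, W)
    return torch.sqrt(torch.sum(grads ** 2, dim=2))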



Development

Successfully merging this pull request may close these issues.

Implementation of gradient_map does not support >1 channels

2 participants
