@@ -123,22 +123,22 @@ def perfect_reconstruction_loss(
     ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
         """Return the perfect reconstruction loss.

-        Strang 107: Assuming alias cancellation holds:
-        P(z) = F(z)H(z)
-        Product filter P(z) + P(-z) = 2.
-        However, since alias cancellation is implemented as a soft constraint:
-        P_0 + P_1 = 2
-        Somehow NumPy and PyTorch implement convolution differently.
-        For some reason, the machine learning people call cross-correlation
-        convolution.
-        https://discuss.pytorch.org/t/numpy-convolve-and-conv1d-in-pytorch/12172
-        Therefore for true convolution, one element needs to be flipped.
-
         Returns:
-            list: The numerical value of the alias cancellation loss,
-                as well as both intermediate values for analysis.
+            tuple: The numerical value of the perfect reconstruction loss,
+                as well as both intermediate values for analysis.

         """
+        # Strang 107: assuming alias cancellation holds, the product filter
+        # P(z) = F(z)H(z) must satisfy P(z) + P(-z) = 2.
+        # Since alias cancellation is only enforced as a soft constraint here,
+        # we penalize deviations from P_0 + P_1 = 2, with P_0 = P(z) and
+        # P_1 = P(-z).
+        # NumPy and PyTorch implement convolution differently: PyTorch's
+        # conv1d actually computes cross-correlation, which the machine
+        # learning literature calls convolution. See
+        # https://discuss.pytorch.org/t/numpy-convolve-and-conv1d-in-pytorch/12172
+        # Therefore, for true convolution, one of the filters must be flipped.
+
         dec_lo, dec_hi, rec_lo, rec_hi = self.filter_bank
         # polynomial multiplication is convolution, compute p(z):
         pad = dec_lo.shape[0] - 1
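
For reference, a minimal standalone sketch of the two points made in the new
comments: conv1d must be fed a flipped filter to perform true (NumPy-style)
convolution, and the summed product filter of a perfect reconstruction bank
equals 2 at its center power and 0 elsewhere. The Haar filter values and the
full_conv helper are illustrative assumptions, not code from this commit.

    import numpy as np
    import torch

    # 1) PyTorch's conv1d computes cross-correlation; flipping one input
    #    and padding fully recovers true convolution, matching np.convolve.
    a = torch.tensor([1.0, 2.0, 3.0])
    b = torch.tensor([0.5, -0.5])
    out = torch.nn.functional.conv1d(
        a.reshape(1, 1, -1),
        b.flip(-1).reshape(1, 1, -1),
        padding=b.shape[0] - 1,
    ).squeeze()
    assert np.allclose(np.convolve(a.numpy(), b.numpy()), out.numpy())

    # 2) Product filter of the orthogonal Haar bank (illustrative values).
    s = 2.0 ** -0.5
    dec_lo, dec_hi = torch.tensor([s, s]), torch.tensor([-s, s])
    rec_lo, rec_hi = torch.tensor([s, s]), torch.tensor([s, -s])

    def full_conv(x, y):
        # true convolution via flipped cross-correlation, as above
        return torch.nn.functional.conv1d(
            x.reshape(1, 1, -1),
            y.flip(-1).reshape(1, 1, -1),
            padding=y.shape[0] - 1,
        ).squeeze()

    # p(z) = F(z)H(z) summed over both branches; perfect reconstruction
    # demands a 2 at the center power and zeros everywhere else.
    p = full_conv(rec_lo, dec_lo) + full_conv(rec_hi, dec_hi)
    target = torch.zeros_like(p)
    target[p.shape[-1] // 2] = 2.0
    loss = torch.sum((p - target) ** 2)
    print(p, loss)  # tensor([0., 2., 0.]) tensor(0.) for Haar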