
Commit d6651c1

docs update.
1 parent: bdf8a30

2 files changed, +12 -12 lines changed


src/ptwt/conv_transform_3.py

Lines changed: 1 addition & 1 deletion
@@ -117,7 +117,7 @@ def wavedec3(
         list: A list with the lll coefficients and dictionaries
         with the filter order strings::
 
-            ("aad", "ada", "add", "daa", "dad", "dda", "ddd")
+            ("aad", "ada", "add", "daa", "dad", "dda", "ddd")
 
         as keys. With a for the low pass or approximation filter and
         d for the high-pass or detail filter.
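
The docstring above describes the return structure of wavedec3. As a rough usage sketch (assuming the current ptwt API; the Haar wavelet, input shape, and level are illustrative only), the first list entry is the lll approximation tensor and each following entry is a dictionary keyed by the strings listed above:

import torch
import ptwt

data = torch.randn(1, 32, 32, 32)             # batch of one 3d volume (illustrative shape)
coeffs = ptwt.wavedec3(data, "haar", level=2)
lll, detail_dicts = coeffs[0], coeffs[1:]
print(lll.shape)                              # coarsest approximation (lll) coefficients
print(sorted(detail_dicts[0].keys()))         # ['aad', 'ada', 'add', 'daa', 'dad', 'dda', 'ddd']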

src/ptwt/wavelets_learnable.py

Lines changed: 11 additions & 11 deletions
@@ -123,22 +123,22 @@ def perfect_reconstruction_loss(
     ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
         """Return the perfect reconstruction loss.
 
-        Strang 107: Assuming alias cancellation holds:
-        P(z) = F(z)H(z)
-        Product filter P(z) + P(-z) = 2.
-        However, since alias cancellation is implemented as a soft constraint:
-        P_0 + P_1 = 2
-        Somehow NumPy and PyTorch implement convolution differently.
-        For some reason, the machine learning people call cross-correlation
-        convolution.
-        https://discuss.pytorch.org/t/numpy-convolve-and-conv1d-in-pytorch/12172
-        Therefore for true convolution, one element needs to be flipped.
-
         Returns:
             list: The numerical value of the alias cancellation loss,
             as well as both intermediate values for analysis.
 
         """
+        # Strang 107: Assuming alias cancellation holds:
+        # P(z) = F(z)H(z)
+        # Product filter P(z) + P(-z) = 2.
+        # However, since alias cancellation is implemented as a soft constraint:
+        # P_0 + P_1 = 2
+        # Somehow NumPy and PyTorch implement convolution differently.
+        # For some reason, the machine learning people call cross-correlation
+        # convolution.
+        # https://discuss.pytorch.org/t/numpy-convolve-and-conv1d-in-pytorch/12172
+        # Therefore for true convolution, one element needs to be flipped.
+
         dec_lo, dec_hi, rec_lo, rec_hi = self.filter_bank
         # polynomial multiplication is convolution, compute p(z):
         pad = dec_lo.shape[0] - 1
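
The comment block that this commit moves into the function body summarizes the check the loss implements. Below is a minimal sketch of the same idea (not ptwt's exact code; the Haar filters and the poly_mul helper are illustrative): polynomial multiplication is true convolution, torch's conv1d computes cross-correlation, so one factor gets flipped, and perfect reconstruction requires the summed product filter to be 2 at its centre tap and 0 everywhere else.

import torch
import torch.nn.functional as F

# Orthogonal Haar filter bank (illustrative stand-in for self.filter_bank).
s = 2.0 ** -0.5
dec_lo = torch.tensor([s, s])    # analysis low-pass   H0
dec_hi = torch.tensor([-s, s])   # analysis high-pass  H1
rec_lo = torch.tensor([s, s])    # synthesis low-pass  F0
rec_hi = torch.tensor([s, -s])   # synthesis high-pass F1

def poly_mul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # F.conv1d is cross-correlation; flipping one factor yields true
    # convolution, i.e. polynomial multiplication of the two filters.
    pad = b.shape[0] - 1
    return F.conv1d(
        a.reshape(1, 1, -1), b.flip(-1).reshape(1, 1, -1), padding=pad
    ).flatten()

# P(z) = F0(z)H0(z) + F1(z)H1(z); perfect reconstruction means this product
# filter equals 2 at the centre coefficient and 0 everywhere else.
p_test = poly_mul(rec_lo, dec_lo) + poly_mul(rec_hi, dec_hi)
print(p_test)  # approximately tensor([0., 2., 0.]) for Haar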

0 commit comments
