
Commit ff16d32

studyingeugene authored and fracape committed
refactor: remove redundant Tensor allocation in GaussianConditional.update()
In GaussianConditional.update(), a temporary Tensor is allocated:

    quantized_cdf = torch.Tensor(len(pmf_length), max_length + 2)
    quantized_cdf = self._pmf_to_cdf(pmf, tail_mass, pmf_length, max_length)

The first line is a dead store: the variable is immediately overwritten by the result of _pmf_to_cdf(). This removes the unnecessary allocation.

- No functional changes
- Slightly reduces heap traffic and avoids creating an uninitialized Tensor
- Keeps dtype/device fully defined by _pmf_to_cdf()
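To make the dead-store argument concrete, here is a minimal standalone sketch (the shapes are made up and torch.zeros() stands in for self._pmf_to_cdf(); this is an illustration, not code from the repository):

    import torch

    # torch.Tensor(rows, cols) is the legacy constructor: it allocates an
    # *uninitialized* float32 CPU tensor of that shape.
    quantized_cdf = torch.Tensor(4, 12)   # dead store: allocated, never read

    # The next statement rebinds the name, so the tensor above is discarded
    # without ever being read; zeros() stands in for self._pmf_to_cdf(...).
    quantized_cdf = torch.zeros(4, 12, dtype=torch.int32)

    print(quantized_cdf.dtype, quantized_cdf.device)  # torch.int32 cpu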
1 parent 316a300 commit ff16d32


compressai/entropy_models/entropy_models.py

Lines changed: 0 additions & 1 deletion
@@ -677,7 +677,6 @@ def update(self):
 
         tail_mass = 2 * lower[:, :1]
 
-        quantized_cdf = torch.Tensor(len(pmf_length), max_length + 2)
         quantized_cdf = self._pmf_to_cdf(pmf, tail_mass, pmf_length, max_length)
         self._quantized_cdf = quantized_cdf
         self._offset = -pmf_center
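The "dtype/device fully defined by _pmf_to_cdf()" bullet holds because the helper allocates and returns its own output tensor. Below is a simplified sketch of that pattern; pmf_to_cdf_sketch is a hypothetical stand-in rather than CompressAI's actual _pmf_to_cdf, and the fixed-point scaling is only illustrative:

    import torch

    def pmf_to_cdf_sketch(pmf, tail_mass, pmf_length, max_length):
        # Simplified stand-in for _pmf_to_cdf, shown only to make the point:
        # the output tensor is allocated *inside* the helper, so its dtype and
        # device never depend on anything the caller pre-allocated.
        cdf = torch.zeros(
            (len(pmf_length), max_length + 2), dtype=torch.int32, device=pmf.device
        )
        for i, p in enumerate(pmf):
            # One PMF row plus its tail mass, turned into a fixed-point CDF row.
            prob = torch.cat((p[: int(pmf_length[i])], tail_mass[i]), dim=0)
            scaled = torch.cumsum(prob, dim=0) * (1 << 16)
            row = torch.cat((scaled.new_zeros(1), scaled)).round().to(torch.int32)
            cdf[i, : row.numel()] = row
        return cdf

    # Tiny usage example with made-up shapes:
    pmf = torch.tensor([[0.2, 0.3, 0.4], [0.5, 0.4, 0.0]])
    tail_mass = torch.tensor([[0.1], [0.1]])
    pmf_length = torch.tensor([3, 2])
    cdf = pmf_to_cdf_sketch(pmf, tail_mass, pmf_length, max_length=3)
    print(cdf.shape, cdf.dtype)  # torch.Size([2, 5]) torch.int32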
