Clarification on Variable Bitrate Compression Implementation #347
Unanswered
teaandscones asked this question in Q&A
Hello,
Thank you for the excellent work!
I’m relatively new to the library, and I was wondering if you could give some guidance on adapting it to implement what is described in the section “Algorithms for variable bitrate compression with a single set of NN weights” of your paper: https://arxiv.org/pdf/2402.18930.
From equations (3) and (4), it seems that scaling the latent tensor before and after the rounding operations might be sufficient.
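For concreteness, here is a minimal sketch of my reading of those equations. The per-rate scalar `gain` is something I am introducing for illustration; it is not part of the CompressAI API:

```python
import torch

def quantize_with_gain(y: torch.Tensor, gain: float) -> torch.Tensor:
    # My reading of Eqs. (3)-(4): scale the latent by a per-rate gain,
    # round, then undo the scaling afterwards.
    y_scaled = y * gain            # larger gain -> finer effective step -> higher rate
    y_hat_scaled = torch.round(y_scaled)
    return y_hat_scaled / gain     # back to the original latent range
```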

However, this is not entirely clear to me. Shouldn’t the entropy model also be adjusted accordingly?
Looking at where the rounding operation occurs in the library (compressai/entropy_models/entropy_models.py, line 169 at 4fbc02f), I noticed that the means are subtracted beforehand (line 167 at the same commit).
Therefore, if I scale the latent tensor before rounding, it seems the means would have to be scaled consistently as well.
But these means are also passed to the entropy model when computing the bitstreams: for example, they are first retrieved via _get_medians() and then passed to the compress method of the entropy model (compressai/entropy_models/entropy_models.py, lines 556 to 562 at 4fbc02f).
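To make my concern concrete, here is a toy numerical sketch (not library code) of the mismatch I suspect would occur if only the latent were scaled while the means stayed in the unscaled domain:

```python
import torch

y = torch.randn(1, 8, 4, 4)        # toy latent
means = torch.full_like(y, 0.5)    # toy means, subtracted before rounding
gain = 2.0                         # illustrative per-rate gain

# Current (unscaled) behaviour: round(y - means) + means
y_hat = torch.round(y - means) + means

# Naive scaling of the latent only: the means no longer match the scaled domain
y_hat_naive = (torch.round(gain * y - means) + means) / gain

# Consistent version: scale the means too,
# i.e. round(gain * (y - means)) / gain + means
y_hat_consistent = (torch.round(gain * y - gain * means) + gain * means) / gain
```

If the entropy model’s CDFs and medians still correspond to the unscaled latent, I would expect a similar mismatch on the bitstream side.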
Could you please clarify whether the entropy model needs to be modified as well, or if scaling alone is sufficient?
Thank you in advance!