@@ -4,42 +4,42 @@ TensorFlow Compression (TFC) contains data compression tools for TensorFlow.

You can use this library to build your own ML models with end-to-end optimized
data compression built in. It's useful to find storage-efficient representations
- of your data (images, features, examples, etc.) while only sacrificing a tiny
- fraction of model performance. It can compress any floating point tensor to a
- much smaller sequence of bits.
+ of your data (images, features, examples, etc.) while only sacrificing a small
+ fraction of model performance.

Specifically, the entropy model classes in this library simplify the process of
designing rate–distortion optimized codes. During training, they act like
likelihood models. Once training is completed, they encode floating point
- tensors into optimal bit sequences by automating the design of probability
+ tensors into optimized bit sequences by automating the design of probability
tables and calling a range coder implementation behind the scenes.
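
For a rough sense of how this looks in code, the sketch below builds an entropy model over a latent tensor, uses it as a likelihood model during training, and then compresses and decompresses the latent after training. The class names and call signatures used here (`ContinuousBatchedEntropyModel`, `NoisyDeepFactorized`, `compress`, `decompress`) are one plausible way to use the Python API; check the API documentation for the exact interface.

```python
import tensorflow as tf
import tensorflow_compression as tfc

# A factorized prior over a 64-channel latent; compression=True builds the
# probability tables used by the range coder.
entropy_model = tfc.ContinuousBatchedEntropyModel(
    tfc.NoisyDeepFactorized(batch_shape=(64,)), coding_rank=3, compression=True)

y = tf.random.normal((8, 16, 16, 64))  # latent produced by some encoder network

# During training: returns a noise-perturbed latent and its estimated bit cost,
# so the entropy model behaves like a differentiable likelihood model.
y_tilde, bits = entropy_model(y, training=True)

# After training: encode to actual bit strings and decode them back.
strings = entropy_model.compress(y)
y_hat = entropy_model.decompress(strings, y.shape[1:-1])
```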

- Range coding (a.k.a. arithmetic coding) is exposed to TensorFlow models with a
- set of flexible TF ops written in C++. These include an optional "overflow"
+ The library implements range coding (a.k.a. arithmetic coding) using a set of
+ flexible TF ops written in C++. These include an optional "overflow"
functionality that embeds an Elias gamma code into the range encoded bit
sequence, making it possible to encode the entire set of signed integers rather
than just a finite range.
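
To make the idea concrete, here is a small, purely illustrative Elias gamma encoder in plain Python, together with one possible mapping from signed to positive integers. This is not the C++ implementation used by the ops, and the exact mapping the overflow functionality applies may differ.

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code of a positive integer: unary length prefix, then binary."""
    assert n >= 1
    binary = bin(n)[2:]                      # e.g. 5 -> "101"
    return "0" * (len(binary) - 1) + binary  # e.g. 5 -> "00101"

def signed_to_positive(v: int) -> int:
    """One possible mapping of signed to positive integers: 0, -1, 1, -2, ... -> 1, 2, 3, 4, ..."""
    return 2 * v + 1 if v >= 0 else -2 * v

for v in (0, -1, 3, -7):
    print(v, "->", elias_gamma(signed_to_positive(v)))
```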

The main novelty of the learned approach over traditional transform coding is
the stochastic minimization of the rate-distortion Lagrangian, and using
nonlinear transforms implemented by neural networks. For an introduction to
- this, consider our [paper on nonlinear transform
- coding](https://arxiv.org/abs/2007.03034), or watch @jonycgn's [talk on learned
- image compression](https://www.youtube.com/watch?v=x_q7cZviXkY).
+ this from a data compression perspective, consider our [paper on nonlinear
+ transform coding](https://arxiv.org/abs/2007.03034), or watch @jonycgn's [talk
+ on learned image compression](https://www.youtube.com/watch?v=x_q7cZviXkY). For
+ an introduction to lossy data compression from a machine learning perspective,
+ take a look at @yiboyang's [review paper](https://arxiv.org/abs/2202.06533).
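
Concretely, the training objective has the form rate + λ · distortion. The sketch below uses mean squared error as the distortion and an arbitrary λ; both are illustrative choices rather than anything prescribed by the library.

```python
import tensorflow as tf

def rate_distortion_loss(x, x_hat, bits, lmbda=0.01):
    """Rate-distortion Lagrangian: average bits per pixel plus lambda times MSE.

    `lmbda` and the MSE distortion are illustrative choices; `bits` would come
    from an entropy model as in the sketch above.
    """
    num_pixels = tf.cast(tf.reduce_prod(tf.shape(x)[1:-1]), tf.float32)
    rate = tf.reduce_mean(bits) / num_pixels            # average bits per pixel
    distortion = tf.reduce_mean(tf.square(x - x_hat))   # mean squared error
    return rate + lmbda * distortion
```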

## Documentation & getting help

+ Refer to [the API
+ documentation](https://tensorflow.github.io/compression/docs/api_docs/python/tfc.html)
+ for a complete description of the classes and functions this package implements.
+
Please post all questions or comments on
- [Discussions](https://github.com/tensorflow/compression/discussions) or on the
- [Google Group](https://groups.google.com/g/tensorflow-compression). Only file
+ [Discussions](https://github.com/tensorflow/compression/discussions). Only file
[Issues](https://github.com/tensorflow/compression/issues) for actual bugs or
feature requests. On Discussions, you may get a faster answer, and you help
other people find the question or answer more easily later.

- Refer to [the API
- documentation](https://tensorflow.github.io/compression/docs/api_docs/python/tfc.html)
- for a complete description of the classes and functions this package implements.
-

## Installation

***Note: Precompiled packages are currently only provided for Linux and