
Commit a7998c5

Johannes Ballé authored and copybara-github committed
Documentation updates for 2.7.0 release.
PiperOrigin-RevId: 424350717 Change-Id: I8616f82109e322e984e0c2b270b50fd2b868ddbe
1 parent 59d9f18 commit a7998c5


49 files changed: +6582 −2266 lines

README.md

Lines changed: 31 additions & 12 deletions
@@ -14,12 +14,18 @@ likelihood models. Once training is completed, they encode floating point
 tensors into optimal bit sequences by automating the design of probability
 tables and calling a range coder implementation behind the scenes.

-The main novelty of this method over traditional transform coding is the
-stochastic minimization of the rate-distortion Lagrangian, and using nonlinear
-transforms implemented by neural networks. For an introduction to this, consider
-our [paper on nonlinear transform coding](https://arxiv.org/abs/2007.03034), or
-watch @jonycgn's [talk on learned image
-compression](https://www.youtube.com/watch?v=x_q7cZviXkY).
+Range coding (a.k.a. arithmetic coding) is exposed to TensorFlow models with a
+set of flexible TF ops written in C++. These include an optional "overflow"
+functionality that embeds an Elias gamma code into the range encoded bit
+sequence, making it possible to encode the entire set of signed integers rather
+than just a finite range.
+
+The main novelty of the learned approach over traditional transform coding is
+the stochastic minimization of the rate-distortion Lagrangian, and using
+nonlinear transforms implemented by neural networks. For an introduction to
+this, consider our [paper on nonlinear transform
+coding](https://arxiv.org/abs/2007.03034), or watch @jonycgn's [talk on learned
+image compression](https://www.youtube.com/watch?v=x_q7cZviXkY).

 ## Documentation & getting help

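To make the "overflow" mechanism described above concrete, here is a minimal pure-Python sketch of an Elias gamma code, together with one common way to fold signed integers onto the positive integers so they can be gamma-coded. This is an illustration only: the `zigzag` helper name is made up, and whether the C++ ops use this exact signed-to-unsigned mapping is an assumption.

```python
def zigzag(i: int) -> int:
    # Fold signed integers onto 1, 2, 3, ... (0 -> 1, -1 -> 2, 1 -> 3, ...).
    # A common choice for illustration; not necessarily the mapping the
    # C++ ops use.
    return 2 * i + 1 if i >= 0 else -2 * i

def elias_gamma(n: int) -> str:
    # Elias gamma code of n >= 1: as many zeros as there are bits after the
    # leading 1-bit, followed by the binary representation of n itself.
    assert n >= 1
    b = bin(n)[2:]  # binary digits, most significant first
    return "0" * (len(b) - 1) + b

code = elias_gamma(zigzag(-2))  # -2 -> 4 -> "00100"
```

Because the code length grows with the magnitude of the integer, arbitrarily large outliers remain encodable while small values stay cheap.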
@@ -37,8 +43,8 @@ for a complete description of the classes and functions this package implements.
 ## Installation

 ***Note: Precompiled packages are currently only provided for Linux and
-Darwin/Mac OS and Python 3.6-3.9. To use these packages on Windows, consider
-using a [TensorFlow Docker image](https://www.tensorflow.org/install/docker) and
+Darwin/Mac OS. To use these packages on Windows, consider using a
+[TensorFlow Docker image](https://www.tensorflow.org/install/docker) and
 installing TensorFlow Compression using pip inside the Docker container.***

 Set up an environment in which you can install precompiled binary Python
@@ -67,6 +73,15 @@ python -m tensorflow_compression.all_tests
 Once the command finishes, you should see a message ```OK (skipped=29)``` or
 similar in the last line.

+### Colab
+
+To try out TFC live in a [Colab](https://colab.research.google.com/), run the
+following command in a cell before executing your Python code:
+
+```
+!pip install tensorflow-compression
+```
+
 ### Docker

 To use a Docker container (e.g. on Windows), be sure to install Docker
@@ -76,7 +91,7 @@ and then run the `pip install` command inside the Docker container, not on the
 host. For instance, you can use a command line like this:

 ```bash
-docker run tensorflow/tensorflow:2.5.0 bash -c \
+docker run tensorflow/tensorflow:latest bash -c \
 "pip install tensorflow-compression &&
 python -m tensorflow_compression.all_tests"
 ```
@@ -144,8 +159,12 @@ appended (any existing extensions will not be removed).
 The
 [models directory](https://github.com/tensorflow/compression/tree/master/models)
 contains several implementations of published image compression models to enable
-easy experimentation. The instructions below talk about a re-implementation of
-the model published in:
+easy experimentation. Note that in order to reproduce published results, more
+tuning of the code and training dataset may be necessary. Use the `tfci.py`
+script above to access published models.
+
+The following instructions talk about a re-implementation of the model published
+in:

 > "End-to-end optimized image compression"<br />
 > J. Ballé, V. Laparra, E. P. Simoncelli<br />
@@ -207,7 +226,7 @@ This section describes the necessary steps to build your own pip packages of
 TensorFlow Compression. This may be necessary to install it on platforms for
 which we don't provide precompiled binaries (currently only Linux and Darwin).

-We use the custom-op Docker images (e.g.
+You can use the custom-op Docker images (e.g.
 `tensorflow/tensorflow:nightly-custom-op-ubuntu16`) for building pip packages
 for Linux. Note that this is different from `tensorflow/tensorflow:devel`. To be
 compatible with the TensorFlow pip package, the GCC version must match, but

docs/api_docs/python/tfc.md

Lines changed: 17 additions & 9 deletions
@@ -68,8 +68,6 @@ Data compression in TensorFlow.

 [`class RDFTParameter`](./tfc/RDFTParameter.md): RDFT reparameterization of a convolution kernel.

-[`class Round`](./tfc/Round.md): Applies rounding.
-
 [`class RoundAdapter`](./tfc/RoundAdapter.md): Continuous density function + round.

 [`class SignalConv1D`](./tfc/SignalConv1D.md): 1D convolution layer.
@@ -94,6 +92,22 @@ Data compression in TensorFlow.

 ## Functions

+[`create_range_decoder(...)`](./tfc/create_range_decoder.md): Creates range decoder objects to be used by `EntropyDecode*` ops.
+
+[`create_range_encoder(...)`](./tfc/create_range_encoder.md): Creates range encoder objects to be used by `EntropyEncode*` ops.
+
+[`entropy_decode_channel(...)`](./tfc/entropy_decode_channel.md): Decodes the encoded stream inside `handle`.
+
+[`entropy_decode_finalize(...)`](./tfc/entropy_decode_finalize.md): Finalizes the decoding process. This op performs a *weak* sanity check, and the
+
+[`entropy_decode_index(...)`](./tfc/entropy_decode_index.md): Decodes the encoded stream inside `handle`.
+
+[`entropy_encode_channel(...)`](./tfc/entropy_encode_channel.md): Encodes each input in `value`.
+
+[`entropy_encode_finalize(...)`](./tfc/entropy_encode_finalize.md): Finalizes the encoding process and extracts byte stream from the encoder.
+
+[`entropy_encode_index(...)`](./tfc/entropy_encode_index.md): Encodes each input in `value` according to a distribution selected by `index`.
+
 [`estimate_tails(...)`](./tfc/estimate_tails.md): Estimates approximate tail quantiles.

 [`lower_bound(...)`](./tfc/lower_bound.md): Same as `tf.maximum`, but with helpful gradient for `inputs < bound`.
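The `*_index` ops added above select a per-symbol probability distribution through an index tensor. As a rough intuition pump (plain Python, not the tfc API; the helper name and tables are invented for illustration), an ideal range coder spends close to `-log2 p` bits per symbol, where `p` is the probability the selected table assigns to that symbol:

```python
import math

def indexed_bit_cost(values, indexes, tables):
    # Ideal code length in bits when tables[indexes[k]] is the distribution
    # used to code values[k]; a real range coder approaches this total.
    return sum(-math.log2(tables[i][v]) for v, i in zip(values, indexes))

tables = [
    [0.5, 0.25, 0.25],  # distribution selected by index 0
    [0.1, 0.1, 0.8],    # distribution selected by index 1
]
# Symbol 0 coded with table 0, two symbols "2" coded with table 1.
cost = indexed_bit_cost([0, 2, 2], [0, 1, 1], tables)
```

Choosing a well-matched table per symbol (via `index`) is what makes the indexed variants cheaper than coding everything with one shared distribution.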
@@ -106,9 +120,7 @@ Data compression in TensorFlow.

 [`quantization_offset(...)`](./tfc/quantization_offset.md): Computes distribution-dependent quantization offset.

-[`range_decode(...)`](./tfc/range_decode.md): Range-decodes `code` into an int32 tensor of shape `shape`.
-
-[`range_encode(...)`](./tfc/range_encode.md): Range encodes integer `data` with a finite alphabet.
+[`round_st(...)`](./tfc/round_st.md): Straight-through round with optional quantization offset.

 [`same_padding_for_kernel(...)`](./tfc/same_padding_for_kernel.md): Determine correct amount of padding for `same` convolution.

@@ -118,10 +130,6 @@ Data compression in TensorFlow.

 [`soft_round_inverse(...)`](./tfc/soft_round_inverse.md): Inverse of soft_round().

-[`unbounded_index_range_decode(...)`](./tfc/unbounded_index_range_decode.md): Range decodes `encoded` using an indexed probability table.
-
-[`unbounded_index_range_encode(...)`](./tfc/unbounded_index_range_encode.md): Range encodes unbounded integer `data` using an indexed probability table.
-
 [`upper_bound(...)`](./tfc/upper_bound.md): Same as `tf.minimum`, but with helpful gradient for `inputs > bound`.

 [`upper_tail(...)`](./tfc/upper_tail.md): Approximates upper tail quantile for range coding.
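The `soft_round` family listed above refers to the differentiable rounding surrogate from the learned-compression literature (Agustsson and Theis, 2020). A scalar sketch of the presumed formula, for intuition only; the actual tfc implementation operates on tensors and may differ in details:

```python
import math

def soft_round(x: float, alpha: float) -> float:
    # Presumed formula: interpolates between the identity (alpha -> 0)
    # and hard rounding (alpha -> inf), staying differentiable in between.
    m = math.floor(x) + 0.5
    return m + 0.5 * math.tanh(alpha * (x - m)) / math.tanh(alpha / 2)

# Small alpha behaves like the identity; large alpha behaves like round().
```

This is the piece that makes quantization compatible with gradient-based training of the rate-distortion objective.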
