Commit 2acdf14
Author: Johannes Ballé
Documentation tweaks

1 parent 7c402f0 commit 2acdf14

File tree: 12 files changed, +113 -102 lines


README.md

Lines changed: 37 additions & 22 deletions
@@ -1,44 +1,45 @@
-This package contains data compression ops and layers for TensorFlow.
-
-# Documentation
-
-All documentation is hosted at https://tensorflow.github.io/compression.
-
-Refer to [the API documentation](https://tensorflow.github.io/compression/docs/api_docs/python/tfc.html)
-for a complete description of the Keras layers and TensorFlow ops this package
-implements.
-
-There's also an introduction to our `EntropyBottleneck` class
-[here](https://tensorflow.github.io/compression/docs/entropy_bottleneck.html),
-and a description of the range coding operators
-[here](https://tensorflow.github.io/compression/docs/range_coding.html).
-
-# Google group
-
-For usage questions and discussions, please head over to our
-[Google group](https://groups.google.com/forum/#!forum/tensorflow-compression).
+This project contains data compression ops and layers for TensorFlow. The
+project website is at https://tensorflow.github.io/compression.
 
 # Quick start
 
 **Please note**: You need TensorFlow 1.9 (or the master branch as of May 2018)
 or later installed.
 
+Clone the repository to a filesystem location of your choice, or download the
+ZIP file and unpack it. Then include the root directory in your `PYTHONPATH`
+environment variable:
+
+```bash
+cd <target directory>
+git clone https://github.com/tensorflow/compression.git tensorflow_compression
+export PYTHONPATH="$PWD/tensorflow_compression:$PYTHONPATH"
+```
+
 To make sure the library imports succeed, try running the unit tests:
 
 ```bash
+cd tensorflow_compression
 for i in tensorflow_compression/python/*/*_test.py; do
 python $i
 done
 ```
 
+We recommend importing the library from your Python code as follows:
+
+```python
+import tensorflow as tf
+import tensorflow_compression as tfc
+```
+
 ## Example model
 
 The [examples directory](https://github.com/tensorflow/compression/tree/master/examples)
 directory contains an implementation of the image compression model described
 in:
 
-> J. Ballé, V. Laparra, E. P. Simoncelli:
-> "End-to-end optimized image compression"
+> "End-to-end optimized image compression"<br />
+> J. Ballé, V. Laparra, E. P. Simoncelli<br />
 > https://arxiv.org/abs/1611.01704
 
 To see a list of options, change to the directory and run:

@@ -49,14 +50,28 @@ python bls2017.py -h
 
 To train the model, you need to supply it with a dataset of RGB training images.
 They should be provided in PNG format and must all have the same shape.
-Following training, the python script can be used to compress and decompress
+Following training, the Python script can be used to compress and decompress
 images as follows:
 
 ```bash
 python bls2017.py [options] compress original.png compressed.bin
 python bls2017.py [options] decompress compressed.bin reconstruction.png
 ```
 
+# Help & documentation
+
+For usage questions and discussions, please head over to our
+[Google group](https://groups.google.com/forum/#!forum/tensorflow-compression).
+
+Refer to [the API documentation](https://tensorflow.github.io/compression/docs/api_docs/python/tfc.html)
+for a complete description of the Keras layers and TensorFlow ops this package
+implements.
+
+There's also an introduction to our `EntropyBottleneck` class
+[here](https://tensorflow.github.io/compression/docs/entropy_bottleneck.html),
+and a description of the range coding operators
+[here](https://tensorflow.github.io/compression/docs/range_coding.html).
+
 # Authors
 Johannes Ballé (github: [jonycgn](https://github.com/jonycgn)),
 Sung Jin Hwang (github: [ssjhv](https://github.com/ssjhv)), and

docs/api_docs/python/tfc/EntropyBottleneck.md

Lines changed: 3 additions & 5 deletions
@@ -76,11 +76,9 @@ The layer implements a flexible probability density model to estimate entropy
 of its input tensor, which is described in the appendix of the paper (please
 cite the paper if you use this code for scientific work):
 
-"Variational image compression with a scale hyperprior"
-
-Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
-
-https://arxiv.org/abs/1802.01436
+> "Variational image compression with a scale hyperprior"<br />
+> J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston<br />
+> https://arxiv.org/abs/1802.01436
 
 The layer assumes that the input tensor is at least 2D, with a batch dimension
 at the beginning and a channel dimension as specified by `data_format`. The
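The flexible density model mentioned above is fitted to a continuously relaxed version of the quantized latents: per the cited paper, rounding is replaced during training by additive uniform noise, while actual rounding is used for evaluation and compression. A minimal NumPy sketch of that relaxation (illustrative only, not the layer's TensorFlow implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(scale=4.0, size=(1, 8))  # stand-in for a latent tensor

# Training: quantization relaxed to additive uniform noise in [-0.5, 0.5),
# which, unlike rounding, is differentiable with respect to y.
y_tilde = y + rng.uniform(-0.5, 0.5, size=y.shape)

# Evaluation/compression: actual rounding to the integer grid.
y_hat = np.round(y)
```

Both `y_tilde` and `y_hat` stay within 0.5 of `y`, which is what makes the noisy version a reasonable training-time proxy for the rounded one.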

docs/api_docs/python/tfc/GDN.md

Lines changed: 8 additions & 12 deletions
@@ -58,18 +58,14 @@ Generalized divisive normalization layer.
 
 Based on the papers:
 
-"Density Modeling of Images using a Generalized Normalization
-Transformation"
-
-Johannes Ballé, Valero Laparra, Eero P. Simoncelli
-
-https://arxiv.org/abs/1511.06281
-
-"End-to-end Optimized Image Compression"
-
-Johannes Ballé, Valero Laparra, Eero P. Simoncelli
-
-https://arxiv.org/abs/1611.01704
+> "Density modeling of images using a generalized normalization
+> transformation"<br />
+> J. Ballé, V. Laparra, E.P. Simoncelli<br />
+> https://arxiv.org/abs/1511.06281
+
+> "End-to-end optimized image compression"<br />
+> J. Ballé, V. Laparra, E.P. Simoncelli<br />
+> https://arxiv.org/abs/1611.01704
 
 Implements an activation function that is essentially a multivariate
 generalization of a particular sigmoid-type function:
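A NumPy sketch of the GDN formula from the cited papers may help (a hand-written illustration, not the layer's actual TensorFlow implementation; the names `gdn`, `beta`, and `gamma` are chosen here for clarity):

```python
import numpy as np

def gdn(x, beta, gamma):
    """Sketch of the GDN nonlinearity from Ballé et al.:
    y[i] = x[i] / sqrt(beta[i] + sum_j gamma[j, i] * x[j] ** 2),
    applied over the channel (last) dimension at each position.

    x: (..., channels); beta: (channels,); gamma: (channels, channels).
    """
    # Normalization pool: each channel is divided by a learned combination
    # of the squared activity of all channels.
    pool = np.tensordot(np.square(x), gamma, axes=[[-1], [0]])
    return x / np.sqrt(beta + pool)

# Sanity check: with beta = 1 and gamma = 0, the pool is empty and
# GDN reduces to the identity.
x = np.array([[0.5, -1.0, 2.0]])
y = gdn(x, beta=np.ones(3), gamma=np.zeros((3, 3)))
```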

docs/api_docs/python/tfc/SignalConv1D.md

Lines changed: 11 additions & 9 deletions
@@ -88,15 +88,17 @@ In general, the outputs are equivalent to a composition of:
 5. a pointwise nonlinearity (if `activation is not None`)
 
 For more information on what the difference between convolution and cross
-correlation is, see https://en.wikipedia.org/wiki/Convolution and
-https://en.wikipedia.org/wiki/Cross-correlation. Note that the distinction
-between convolution and cross correlation is occasionally blurred (one may use
-convolution as an umbrella term for both). For a discussion of
-up-/downsampling, see https://en.wikipedia.org/wiki/Upsampling and
-https://en.wikipedia.org/wiki/Decimation_(signal_processing). A more in-depth
-treatment of all of these operations can be found in:
-
-Oppenheim, Schafer, Buck: Discrete-Time Signal Processing (Prentice Hall)
+correlation is, see [this](https://en.wikipedia.org/wiki/Convolution) and
+[this](https://en.wikipedia.org/wiki/Cross-correlation) Wikipedia article,
+respectively. Note that the distinction between convolution and cross
+correlation is occasionally blurred (one may use convolution as an umbrella
+term for both). For a discussion of up-/downsampling, refer to the articles
+about [upsampling](https://en.wikipedia.org/wiki/Upsampling) and
+[decimation](https://en.wikipedia.org/wiki/Decimation_(signal_processing)). A
+more in-depth treatment of all of these operations can be found in:
+
+> "Discrete-Time Signal Processing"<br />
+> Oppenheim, Schafer, Buck (Prentice Hall)
 
 For purposes of this class, the center position of a kernel is always
 considered to be at `K // 2`, where `K` is the support length of the kernel.
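The convolution/cross-correlation distinction can be made concrete with a quick one-dimensional NumPy check (illustrative only; which operation the layer actually performs is governed by its own arguments): flipping the kernel turns one operation into the other.

```python
import numpy as np

# Discrete convolution flips the kernel before sliding it over the signal;
# cross correlation slides the kernel as-is.
x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])

conv = np.convolve(x, k, mode="valid")    # true convolution
corr = np.correlate(x, k, mode="valid")   # cross correlation

# Correlating with the reversed kernel reproduces the convolution.
conv_via_corr = np.correlate(x, k[::-1], mode="valid")
```

For a symmetric kernel the two coincide, which is why the distinction is so often blurred in practice.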

docs/api_docs/python/tfc/SignalConv2D.md

Lines changed: 11 additions & 9 deletions
@@ -88,15 +88,17 @@ In general, the outputs are equivalent to a composition of:
 5. a pointwise nonlinearity (if `activation is not None`)
 
 For more information on what the difference between convolution and cross
-correlation is, see https://en.wikipedia.org/wiki/Convolution and
-https://en.wikipedia.org/wiki/Cross-correlation. Note that the distinction
-between convolution and cross correlation is occasionally blurred (one may use
-convolution as an umbrella term for both). For a discussion of
-up-/downsampling, see https://en.wikipedia.org/wiki/Upsampling and
-https://en.wikipedia.org/wiki/Decimation_(signal_processing). A more in-depth
-treatment of all of these operations can be found in:
-
-Oppenheim, Schafer, Buck: Discrete-Time Signal Processing (Prentice Hall)
+correlation is, see [this](https://en.wikipedia.org/wiki/Convolution) and
+[this](https://en.wikipedia.org/wiki/Cross-correlation) Wikipedia article,
+respectively. Note that the distinction between convolution and cross
+correlation is occasionally blurred (one may use convolution as an umbrella
+term for both). For a discussion of up-/downsampling, refer to the articles
+about [upsampling](https://en.wikipedia.org/wiki/Upsampling) and
+[decimation](https://en.wikipedia.org/wiki/Decimation_(signal_processing)). A
+more in-depth treatment of all of these operations can be found in:
+
+> "Discrete-Time Signal Processing"<br />
+> Oppenheim, Schafer, Buck (Prentice Hall)
 
 For purposes of this class, the center position of a kernel is always
 considered to be at `K // 2`, where `K` is the support length of the kernel.

docs/api_docs/python/tfc/SignalConv3D.md

Lines changed: 11 additions & 9 deletions
@@ -88,15 +88,17 @@ In general, the outputs are equivalent to a composition of:
 5. a pointwise nonlinearity (if `activation is not None`)
 
 For more information on what the difference between convolution and cross
-correlation is, see https://en.wikipedia.org/wiki/Convolution and
-https://en.wikipedia.org/wiki/Cross-correlation. Note that the distinction
-between convolution and cross correlation is occasionally blurred (one may use
-convolution as an umbrella term for both). For a discussion of
-up-/downsampling, see https://en.wikipedia.org/wiki/Upsampling and
-https://en.wikipedia.org/wiki/Decimation_(signal_processing). A more in-depth
-treatment of all of these operations can be found in:
-
-Oppenheim, Schafer, Buck: Discrete-Time Signal Processing (Prentice Hall)
+correlation is, see [this](https://en.wikipedia.org/wiki/Convolution) and
+[this](https://en.wikipedia.org/wiki/Cross-correlation) Wikipedia article,
+respectively. Note that the distinction between convolution and cross
+correlation is occasionally blurred (one may use convolution as an umbrella
+term for both). For a discussion of up-/downsampling, refer to the articles
+about [upsampling](https://en.wikipedia.org/wiki/Upsampling) and
+[decimation](https://en.wikipedia.org/wiki/Decimation_(signal_processing)). A
+more in-depth treatment of all of these operations can be found in:
+
+> "Discrete-Time Signal Processing"<br />
+> Oppenheim, Schafer, Buck (Prentice Hall)
 
 For purposes of this class, the center position of a kernel is always
 considered to be at `K // 2`, where `K` is the support length of the kernel.

docs/api_docs/python/tfc/same_padding_for_kernel.md

Lines changed: 4 additions & 4 deletions
@@ -14,8 +14,8 @@ tfc.same_padding_for_kernel(
 
 Determine correct amount of padding for `same` convolution.
 
-To implement `same` convolutions, we first pad the image, and then perform a
-`valid` convolution or correlation. Given the kernel shape, this function
+To implement `'same'` convolutions, we first pad the image, and then perform a
+`'valid'` convolution or correlation. Given the kernel shape, this function
 determines the correct amount of padding so that the output of the convolution
 or correlation is the same size as the pre-padded input.
 
@@ -24,8 +24,8 @@ or correlation is the same size as the pre-padded input.
 * <b>`shape`</b>: Shape of the convolution kernel (without the channel dimensions).
 * <b>`corr`</b>: Boolean. If `True`, assume cross correlation, if `False`, convolution.
 * <b>`strides_up`</b>: If this is used for an upsampled convolution, specify the
-  strides here. (For downsampled convolutions, specify (1, 1): in that case,
-  the strides don't matter.)
+  strides here. (For downsampled convolutions, specify `(1, 1)`: in that
+  case, the strides don't matter.)
 
 
 #### Returns:
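Under the `K // 2` center convention used by this package, the padding arithmetic can be sketched as follows (`same_padding_sketch` is a hypothetical helper written for illustration; it ignores `strides_up` and is not the library function):

```python
def same_padding_sketch(shape, corr=True):
    """Sketch of 'same' padding with the kernel center at K // 2.

    For cross correlation, output position n reads input indices
    n - K // 2 ... n + (K - 1) // 2, so we pad K // 2 samples before and
    (K - 1) // 2 after; convolution flips the kernel, which swaps the two.
    """
    padding = []
    for k in shape:
        if corr:
            padding.append((k // 2, (k - 1) // 2))
        else:
            padding.append(((k - 1) // 2, k // 2))
    return padding

# An odd kernel pads symmetrically: a 5x5 kernel needs two pixels on
# every side; only even kernel sizes make the corr/conv choice visible.
print(same_padding_sketch((5, 5)))  # [(2, 2), (2, 2)]
```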

docs/entropy_bottleneck.md

Lines changed: 2 additions & 2 deletions
@@ -19,8 +19,8 @@ The layer implements a flexible probability density model to estimate entropy,
 which is described in the appendix of the paper (please cite the paper if you
 use this code for scientific work):
 
-> J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston:
-> "Variational image compression with a scale hyperprior"
+> "Variational image compression with a scale hyperprior"<br />
+> J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston<br />
 > https://arxiv.org/abs/1802.01436
 
 The layer assumes that the input tensor is at least 2D, with a batch dimension

tensorflow_compression/python/layers/entropy_models.py

Lines changed: 3 additions & 5 deletions
@@ -57,11 +57,9 @@ class EntropyBottleneck(base_layer.Layer):
 of its input tensor, which is described in the appendix of the paper (please
 cite the paper if you use this code for scientific work):
 
-"Variational image compression with a scale hyperprior"
-
-Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
-
-https://arxiv.org/abs/1802.01436
+> "Variational image compression with a scale hyperprior"<br />
+> J. Ballé, D. Minnen, S. Singh, S. J. Hwang, N. Johnston<br />
+> https://arxiv.org/abs/1802.01436
 
 The layer assumes that the input tensor is at least 2D, with a batch dimension
 at the beginning and a channel dimension as specified by `data_format`. The

tensorflow_compression/python/layers/gdn.py

Lines changed: 8 additions & 12 deletions
@@ -43,18 +43,14 @@ class GDN(base.Layer):
 
 Based on the papers:
 
-"Density Modeling of Images using a Generalized Normalization
-Transformation"
-
-Johannes Ballé, Valero Laparra, Eero P. Simoncelli
-
-https://arxiv.org/abs/1511.06281
-
-"End-to-end Optimized Image Compression"
-
-Johannes Ballé, Valero Laparra, Eero P. Simoncelli
-
-https://arxiv.org/abs/1611.01704
+> "Density modeling of images using a generalized normalization
+> transformation"<br />
+> J. Ballé, V. Laparra, E.P. Simoncelli<br />
+> https://arxiv.org/abs/1511.06281
+
+> "End-to-end optimized image compression"<br />
+> J. Ballé, V. Laparra, E.P. Simoncelli<br />
+> https://arxiv.org/abs/1611.01704
 
 Implements an activation function that is essentially a multivariate
 generalization of a particular sigmoid-type function:
