Commit 09ec89c

mentzer authored, copybara-github committed

Move examples/ to models/.

Add pre-release version of code for the paper "High-Fidelity Generative Image
Compression".

PiperOrigin-RevId: 317378744
Change-Id: Id4ebf79f8b87da2fa81c24c5d9c237e106c42fa2

1 parent: d42203e

File tree

15 files changed: +2352 −5 lines changed


BUILD

Lines changed: 3 additions & 3 deletions

```diff
@@ -54,21 +54,21 @@ py_binary(
 
 py_binary(
     name = "tfci",
-    srcs = ["examples/tfci.py"],
+    srcs = ["models/tfci.py"],
     python_version = "PY3",
     deps = [":tensorflow_compression"],
 )
 
 py_binary(
     name = "bls2017",
-    srcs = ["examples/bls2017.py"],
+    srcs = ["models/bls2017.py"],
     python_version = "PY3",
     deps = [":tensorflow_compression"],
 )
 
 py_binary(
     name = "bmshj2018",
-    srcs = ["examples/bmshj2018.py"],
+    srcs = ["models/bmshj2018.py"],
     python_version = "PY3",
     deps = [":tensorflow_compression"],
 )
```

README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -119,7 +119,7 @@ import tensorflow_compression as tfc
 ### Using a pre-trained model to compress an image
 
 In the
-[examples directory](https://github.com/tensorflow/compression/tree/master/examples),
+[models directory](https://github.com/tensorflow/compression/tree/master/models),
 you'll find a python script `tfci.py`. Download the file and run:
 ```bash
 python tfci.py -h
@@ -142,7 +142,7 @@ appended (any existing extensions will not be removed).
 ### Training your own model
 
 The
-[examples directory](https://github.com/tensorflow/compression/tree/master/examples)
+[models directory](https://github.com/tensorflow/compression/tree/master/models)
 contains an implementation of the image compression model described in:
 
 > "End-to-end optimized image compression"<br />
````
File renamed without changes.
File renamed without changes.

models/hific/README.md (new file)

Lines changed: 101 additions & 0 deletions

# High-Fidelity Generative Image Compression

<div align="center">
<a href='https://hific.github.io'>
<img src='https://hific.github.io/social/thumb.jpg' width="80%"/>
</a>
</div>

## [[Demo]](https://hific.github.io) [[Paper]](https://arxiv.org/abs/2006.09965) [[Colab]](https://colab.research.google.com/github/tensorflow/compression/blob/master/models/hific/colab.ipynb)

## Abstract

We extensively study how to combine Generative Adversarial Networks and learned
compression to obtain a state-of-the-art generative lossy compression system. In
particular, we investigate normalization layers, generator and discriminator
architectures, training strategies, as well as perceptual losses. In contrast to
previous work, i) we obtain visually pleasing reconstructions that are
perceptually similar to the input, ii) we operate in a broad range of bitrates,
and iii) our approach can be applied to high-resolution images. We bridge the
gap between rate-distortion-perception theory and practice by evaluating our
approach both quantitatively with various perceptual metrics and with a user
study. The study shows that our method is preferred to previous approaches even
if they use more than 2&times; the bitrate.

## Try it out!

[<img src="https://colab.research.google.com/assets/colab-badge.svg" align="center">](https://colab.research.google.com/github/tensorflow/compression/blob/master/models/hific/colab.ipynb)

We show some images on the [demo page](https://hific.github.io), and we release
a [colab](https://colab.research.google.com/github/tensorflow/compression/blob/master/models/hific/colab.ipynb)
for interactively using our models on your own images.

## Using the code

In addition to `tensorflow_compression`, you need to install
[`compare_gan`](https://github.com/google/compare_gan) and TensorFlow 1.15:

```bash
pip install -r requirements.txt
```

## Running our models locally

Use `tfci.py` to encode and decode images with our models locally:

```bash
python tfci.py compress <model> <PNG file>
```

where `model` can be one of `hific-lo`, `hific-mi`, or `hific-hi`.

## Code

The architecture is defined in `arch.py`, which is used to build the model in
`model.py`. Our configurations are in `configs.py`.

### Training your own models

We release a _simplified_ trainer in `train.py` as a starting point for custom
training. Note that it uses LSUN from TFDS, which likely needs to be replaced
with a bigger dataset to obtain state-of-the-art results (see below).

For the paper, we initialize our GAN models from an MSE+LPIPS checkpoint. To
replicate this, first train a model for MSE + LPIPS only, and then use that as
a starting point:

```bash
# First train a model for MSE+LPIPS:
python train.py --config mselpips --ckpt_dir ckpts --num_steps 1M

# Once that finishes, train a GAN model:
python train.py --config hific --ckpt_dir ckpts \
    --init_from ckpts/mselpips --num_steps 1M
```
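The two-stage recipe above amounts to switching the generator objective: rate
plus distortion (MSE + LPIPS) for initialization, then the same objective with
an added adversarial term. A minimal numpy sketch of that switch, with made-up
weights `lmbda` and `beta` (not the paper's configs) and LPIPS omitted for
brevity:

```python
import numpy as np

def hific_style_loss(x, x_hat, bits, d_fake_logit, stage,
                     lmbda=0.1, beta=0.05):
    """Toy generator objective illustrating the two training stages.

    stage "mselpips": rate + distortion only (used for initialization).
    stage "hific":    additionally penalizes a non-saturating GAN term
                      on the discriminator logit for the reconstruction.
    All weights here are illustrative, not the released configs.
    """
    mse = np.mean((x - x_hat) ** 2)   # pixel distortion (a real model adds LPIPS)
    rate = lmbda * bits               # bitrate penalty
    loss = rate + mse
    if stage == "hific":
        # non-saturating generator loss: softplus(-D(x_hat))
        loss += beta * np.log1p(np.exp(-d_fake_logit))
    return loss
```

The GAN stage always pays an extra adversarial penalty until the discriminator
is fooled, which is why it benefits from starting at a good distortion-only
checkpoint.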
To test a trained model, use `eval.py`:

```bash
python eval.py --config hific --ckpt_dir ckpts/hific
```

#### Adapting the dataset

You can change to any other TFDS dataset by changing the `tfds_name` flag for
`build_input`. To train on a custom dataset, you can replace the `_get_dataset`
call in `train.py`.
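As a sketch of that replacement: a custom loader only needs to yield fixed-size
training crops. The generator below fakes image decoding with random arrays;
the name `custom_dataset` and its signature are hypothetical, not part of the
released code:

```python
import numpy as np

def custom_dataset(image_paths, crop_size=256, seed=0):
    """Hypothetical drop-in for `_get_dataset`: yields HxWx3 float crops."""
    rng = np.random.default_rng(seed)
    for _ in image_paths:
        # Stand-in for decoding the file; a real loader would read the image.
        img = rng.random((crop_size * 2, crop_size * 2, 3), dtype=np.float32)
        # Sample a random crop of the requested size.
        top = rng.integers(0, img.shape[0] - crop_size + 1)
        left = rng.integers(0, img.shape[1] - crop_size + 1)
        yield img[top:top + crop_size, left:left + crop_size, :]
```

A real implementation would decode and augment the files at `image_paths`
before cropping.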
## Citation

If you use the work released here for your research, please cite this paper:

```
@inproceedings{mentzer2020hific,
  title={High-Fidelity Generative Image Compression},
  author={Fabian Mentzer and George Toderici and Michael Tschannen and Eirikur Agustsson},
  year={2020}
}
```
