@@ -48,34 +48,40 @@ packages using the `pip` command. Refer to the
for more information on how to set up such a Python environment.

The current stable version of TFC (1.3) requires TensorFlow 1.15. The current
- beta release of TFC (2.0b1) is built for TensorFlow 2.3. For versions compatible
+ beta release of TFC (2.0b2) is built for TensorFlow 2.4. For versions compatible
with TensorFlow 1.14 or earlier, see our [previous
- releases](https://github.com/tensorflow/compression/releases). The following
- instructions are for the stable release. To install the beta, replace the
- version numbers with 2.0b1 and 2.3, respectively.
+ releases](https://github.com/tensorflow/compression/releases).
+
+ ### pip
+
+ To install TF and TFC via `pip`, run the following command:

- You can
- install TensorFlow from any source. To install it via `pip`, run the following
- command:
- ```bash
- pip install tensorflow-gpu==1.15
- ```
- for GPU support, or
```bash
- pip install tensorflow==1.15
+ pip install tensorflow-gpu==1.15 tensorflow-compression==1.3
```
- for CPU-only.

- Then, run the following command to install the tensorflow-compression pip
- package:
+ for the stable release, or
+
```bash
- pip install tensorflow-compression==1.3
+ pip install tensorflow-gpu==2.4 tensorflow-probability==0.12.1 tensorflow-compression==2.0b2
```

+ for the beta release. If you don't need GPU support, you can drop the `-gpu`
+ part.
+
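For reference, the CPU-only equivalents of the two commands above would presumably be:

```bash
# Stable release without GPU support (assumption: only the -gpu suffix is dropped):
pip install tensorflow==1.15 tensorflow-compression==1.3
# Beta release without GPU support:
pip install tensorflow==2.4 tensorflow-probability==0.12.1 tensorflow-compression==2.0b2
```
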
To test that the installation works correctly, you can run the unit tests with
+ (respectively):
+
```bash
python -m tensorflow_compression.python.all_test
```
+
+ or
+
+ ```bash
+ python -m tensorflow_compression.all_tests
+ ```
+
Once the command finishes, you should see a message ```OK (skipped=12)``` or
similar in the last line.

@@ -86,11 +92,21 @@ To use a Docker container (e.g. on Windows), be sure to install Docker
use a [TensorFlow Docker image](https://www.tensorflow.org/install/docker),
and then run the `pip install` command inside the Docker container, not on the
host. For instance, you can use a command line like this:
+
```bash
docker run tensorflow/tensorflow:1.15.0-py3 bash -c \
"pip install tensorflow-compression==1.3 &&
python -m tensorflow_compression.python.all_test"
```
+
+ or (for the beta version):
+
+ ```bash
+ docker run tensorflow/tensorflow:2.4.0 bash -c \
+ "pip install tensorflow-probability==0.12.1 tensorflow-compression==2.0b2 &&
+ python -m tensorflow_compression.all_tests"
+ ```
+
This will fetch the TensorFlow Docker image if it's not already cached, install
the pip package and then run the unit tests to confirm that it works.

@@ -102,6 +118,7 @@ solve this, always install TensorFlow via `pip` rather than `conda`. For
example, this creates an Anaconda environment with Python 3.6 and CUDA
libraries, and then installs TensorFlow and tensorflow-compression with GPU
support:
+
```bash
conda create --name ENV_NAME python=3.6 cudatoolkit=10.0 cudnn
conda activate ENV_NAME

@@ -122,20 +139,25 @@ import tensorflow_compression as tfc
In the
[models directory](https://github.com/tensorflow/compression/tree/master/models),
you'll find a python script `tfci.py`. Download the file and run:
+
```bash
python tfci.py -h
```

This will give you a list of options. Briefly, the command
+
```bash
python tfci.py compress <model> <PNG file>
```
+
will compress an image using a pre-trained model and write a file ending in
`.tfci`. Execute `python tfci.py models` to give you a list of supported
pre-trained models. The command
+
```bash
python tfci.py decompress <TFCI file>
```
+
will decompress a TFCI file and write a PNG file. By default, an output file
will be named like the input file, only with the appropriate file extension
appended (any existing extensions will not be removed).
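
To make the naming behavior concrete, here is a hypothetical round trip (`<model>` stands for any entry listed by `python tfci.py models`; the file name is only an example):

```bash
# Writes kodim01.png.tfci next to the input image.
python tfci.py compress <model> kodim01.png
# Writes kodim01.png.tfci.png; existing extensions are kept.
python tfci.py decompress kodim01.png.tfci
```
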
@@ -151,13 +173,15 @@ contains an implementation of the image compression model described in:
> https://arxiv.org/abs/1611.01704

To see a list of options, download the file `bls2017.py` and run:
+
```bash
python bls2017.py -h
```

To train the model, you need to supply it with a dataset of RGB training images.
They should be provided in PNG format. Training can be as simple as the
following command:
+
```bash
python bls2017.py --verbose train --train_glob="images/*.png"
```

@@ -177,13 +201,15 @@ enough (or larger). This is described in more detail in:
If you wish, you can monitor progress with Tensorboard. To do this, create a
Tensorboard instance in the background before starting the training, then point
your web browser to [port 6006 on your machine](http://localhost:6006):
+
```bash
tensorboard --logdir=. &
```

When training has finished, the Python script can be used to compress and
decompress images as follows. The same model checkpoint must be accessible to
both commands.
+
```bash
python bls2017.py [options] compress original.png compressed.tfci
python bls2017.py [options] decompress compressed.tfci reconstruction.png

@@ -209,6 +235,7 @@ Inside a Docker container from the image, the following steps need to be taken.
209
235
2 . Run ` :build_pip_pkg ` inside the cloned repo.
210
236
211
237
For example:
238
+
212
239
``` bash
213
240
sudo docker run -v /tmp/tensorflow_compression:/tmp/tensorflow_compression \
214
241
tensorflow/tensorflow:nightly-custom-op-ubuntu16 bash -c \
@@ -222,6 +249,7 @@ The wheel file is created inside `/tmp/tensorflow_compression`. Optimization
flags can be passed via `--copt` to the `bazel run` command above.

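As a purely illustrative sketch (the exact arguments expected by the `:build_pip_pkg` target may differ), passing CPU-specific optimization flags could look like this:

```bash
# Hypothetical example: build the custom ops with AVX2 and FMA enabled.
bazel run -c opt --copt=-mavx2 --copt=-mfma :build_pip_pkg
```
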
To test the created package, first install the resulting wheel file:
+
```bash
pip install /tmp/tensorflow_compression/tensorflow_compression-*.whl
```

@@ -230,13 +258,15 @@ Then run the unit tests (Do not run the tests in the workspace directory where
`WORKSPACE` of `tensorflow_compression` repo lives. In that case, the Python
interpreter would attempt to import `tensorflow_compression` packages from the
source tree, rather than from the installed package system directory):
+
```bash
pushd /tmp
python -m tensorflow_compression.all_tests
popd
```

When done, you can uninstall the pip package again:
+
```bash
pip uninstall tensorflow-compression
```