@@ -188,7 +188,7 @@ building.
 
 For compilation optimization flags, the default (`-march=native`) optimizes the
 generated code for your machine's CPU type. However, if building TensorFlow for
-a different CPU type, consider a more specific optimization flag. See the
+a different CPU type, consider a more specific optimization flag. Check the
 [GCC manual](https://gcc.gnu.org/onlinedocs/gcc-4.5.3/gcc/i386-and-x86_002d64-Options.html){:.external}
 for examples.
 
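The optimization advice in the hunk above can be sketched as a concrete invocation. This is an illustration only, not part of the diff: `sandybridge` is a placeholder `-march` value (pick the real target from the GCC manual), passed through to the compiler with Bazel's `--copt` flag.

```shell
# Sketch: building for a specific CPU family instead of the build host.
# "sandybridge" is an illustrative -march value; choose yours from the GCC manual.
bazel build --config=opt \
    --copt=-march=sandybridge \
    //tensorflow/tools/pip_package:build_pip_package
```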
@@ -240,7 +240,8 @@ bazel build --config=v1 [--config=option] //tensorflow/tools/pip_package:build_p
 
 ### Bazel build options
 
-See the Bazel [command-line reference](https://bazel.build/reference/command-line-reference)
+Refer to the Bazel
+[command-line reference](https://bazel.build/reference/command-line-reference)
 for
 [build options](https://bazel.build/reference/command-line-reference#build-options).
 
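As a hedged illustration of the build options the hunk above points to (not part of the diff itself), a couple of commonly used flags from the Bazel command-line reference look like this:

```shell
# Sketch: two real Bazel build options from the command-line reference.
# --jobs caps the number of concurrent build actions (useful on low-RAM hosts);
# --verbose_failures prints the full command line of any failing action.
bazel build --jobs=8 --verbose_failures \
    //tensorflow/tools/pip_package:build_pip_package
```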
@@ -293,17 +294,17 @@ Success: TensorFlow is now installed.
 
 TensorFlow's Docker development images are an easy way to set up an environment
 to build Linux packages from source. These images already contain the source
-code and dependencies required to build TensorFlow. See the TensorFlow
-[Docker guide](./docker.md) for installation and the
+code and dependencies required to build TensorFlow. Go to the TensorFlow
+[Docker guide](./docker.md) for installation instructions and the
 [list of available image tags](https://hub.docker.com/r/tensorflow/tensorflow/tags/){:.external}.
 
 ### CPU-only
 
 The following example uses the `:devel` image to build a CPU-only package from
-the latest TensorFlow source code. See the [Docker guide](./docker.md) for
+the latest TensorFlow source code. Check the [Docker guide](./docker.md) for
 available TensorFlow `-devel` tags.
 
-Download the latest development image and start a Docker container that we'll
+Download the latest development image and start a Docker container that you'll
 use to build the *pip* package:
 
 <pre class="prettyprint lang-bsh">
@@ -368,7 +369,7 @@ On your host machine, the TensorFlow *pip* package is in the current directory
 Docker is the easiest way to build GPU support for TensorFlow since the *host*
 machine only requires the
 [NVIDIA® driver](https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#how-do-i-install-the-nvidia-driver){:.external}
-(the *NVIDIA® CUDA® Toolkit* doesn't have to be installed). See the
+(the *NVIDIA® CUDA® Toolkit* doesn't have to be installed). Refer to the
 [GPU support guide](./gpu.md) and the TensorFlow [Docker guide](./docker.md) to
 set up [nvidia-docker](https://github.com/NVIDIA/nvidia-docker){:.external}
 (Linux only).
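The GPU workflow described in the last hunk can be sketched as follows. This is not part of the diff: it assumes a host with the NVIDIA driver and the NVIDIA container runtime already set up per the GPU support guide, and uses the `:devel-gpu` image tag from the TensorFlow Docker guide.

```shell
# Sketch: pull the GPU development image and start a container for building.
# Only the NVIDIA driver is needed on the host; CUDA ships inside the image.
docker pull tensorflow/tensorflow:devel-gpu
docker run --gpus all -it -w /tensorflow -v "$PWD":/mnt \
    tensorflow/tensorflow:devel-gpu bash
```

The `--gpus all` flag exposes the host GPUs to the container; `-v "$PWD":/mnt` mounts the current host directory so the built *pip* package can be copied out.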