
Commit e382a7f

fix whitespace tailing issue
1 parent d645db9 commit e382a7f

1 file changed: +8, -8 lines


tensorlayer/cli/train.py

Lines changed: 8 additions & 8 deletions
@@ -7,25 +7,25 @@
 (Alpha release - usage might change later)
 
 The tensorlayer.cli.train module provides the ``tl train`` subcommand.
-It helps the user bootstrap a TensorFlow/TensorLayer program for distributed training 
+It helps the user bootstrap a TensorFlow/TensorLayer program for distributed training
 using multiple GPU cards or CPUs on a computer.
 
-You need to first setup the `CUDA_VISIBLE_DEVICES <http://acceleware.com/blog/cudavisibledevices-masking-gpus>`_ 
+You need to first setup the `CUDA_VISIBLE_DEVICES <http://acceleware.com/blog/cudavisibledevices-masking-gpus>`_
 to tell ``tl train`` which GPUs are available. If the CUDA_VISIBLE_DEVICES is not given,
-``tl train`` would try best to discover all available GPUs. 
+``tl train`` would try best to discover all available GPUs.
 
 In distribute training, each TensorFlow program needs a TF_CONFIG environment variable to describe
-the cluster. It also needs a master daemon to 
+the cluster. It also needs a master daemon to
 monitor all trainers. ``tl train`` is responsible
-for automatically managing these two tasks. 
+for automatically managing these two tasks.
 
 Usage
 -----
 
 tl train [-h] [-p NUM_PSS] [-c CPU_TRAINERS] <file> [args [args ...]]
 
 .. code-block:: bash
-  
+
   # example of using GPU 0 and 1 for training mnist
   CUDA_VISIBLE_DEVICES="0,1"
   tl train example/tutorial_mnist_distributed.py
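
For context on the TF_CONFIG variable mentioned in the docstring above: it is the JSON cluster description that TensorFlow's distributed runtime reads from the environment, and ``tl train`` generates one for each process it launches, so users normally never write it by hand. A minimal sketch of what one worker's environment could look like, assuming a made-up cluster of one parameter server and two workers on localhost (none of these values come from the commit):

    # Hypothetical TF_CONFIG that a launcher such as ``tl train`` would export for
    # one process: 1 parameter server, 2 workers, this process acting as worker 0.
    # Hostnames, ports, and indices are illustration values only.
    export TF_CONFIG='{
      "cluster": {
        "ps":     ["localhost:2222"],
        "worker": ["localhost:2223", "localhost:2224"]
      },
      "task": {"type": "worker", "index": 0}
    }'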
@@ -56,13 +56,13 @@
 -----
 A parallel training program would require multiple parameter servers
 to help parallel trainers to exchange intermediate gradients.
-The best number of parameter servers is often proportional to the 
+The best number of parameter servers is often proportional to the
 size of your model as well as the number of CPUs available.
 You can control the number of parameter servers using the ``-p`` parameter.
 
 If you have a single computer with massive CPUs, you can use the ``-c`` parameter
 to enable CPU-only parallel training.
-The reason we are not supporting GPU-CPU co-training is because GPU and 
+The reason we are not supporting GPU-CPU co-training is because GPU and
 CPU are running at different speeds. Using them together in training would
 incur stragglers.
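
Building on the usage line and the ``-p`` / ``-c`` descriptions shown in the diff above, here is a hedged sketch of how those flags combine on the command line; the flag values and GPU list are examples only, while the script path is the one already used in the docstring:

    # Train on GPUs 0 and 1 with two parameter servers (-p NUM_PSS).
    CUDA_VISIBLE_DEVICES="0,1" tl train -p 2 example/tutorial_mnist_distributed.py

    # CPU-only parallel training with eight CPU trainers (-c CPU_TRAINERS), per the
    # docstring's note that GPU-CPU co-training is not supported.
    tl train -c 8 example/tutorial_mnist_distributed.py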
