
Commit e63c32e: "more grammar"
Parent: 9821437

1 file changed: 3 additions, 3 deletions


articles/machine-learning/concept-distributed-training.md

Lines changed: 3 additions & 3 deletions
@@ -17,7 +17,7 @@ ms.date: 03/24/2020
 
 Distributed training refers to the ability to share data loads and training tasks across multiple GPUs to accelerate model training. The typical use case for distributed training is for training [deep learning](concept-deep-learning-vs-machine-learning.md) models.
 
-Deep neural networks are often compute intensive as they require large learning workloads in order to processing millions of examples and parameters across its multiple layers. This deep learning lends itself well to distributed training, since running tasks in parallel instead of serially saves time and compute resources.
+Deep neural networks are often compute intensive as they require large learning workloads in order to process millions of examples and parameters across its multiple layers. This deep learning lends itself well to distributed training, since running tasks in parallel instead of serially saves time and compute resources.
 
 ## Distributed training in Azure Machine Learning
 

@@ -50,8 +50,8 @@ In model parallelism, worker nodes only need to synchronize the shared parameter
 
 * Learn how to [Set up training environments](how-to-set-up-training-targets.md).
 
-* Train ML models with TensorFlow(how-to-train-tensorflow.md)
+* [Train ML models with TensorFlow](how-to-train-tensorflow.md).
 
-* Train ML models with PyTorch(how-to-train-pytorch.md)
+* [Train ML models with PyTorch](how-to-train-pytorch.md).
 
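The paragraph edited in the first hunk argues that distributed training saves time by running tasks in parallel instead of serially. A minimal, framework-free sketch of the underlying data-parallel idea (all names here are hypothetical helpers for illustration, not Azure ML, TensorFlow, or PyTorch API; real training would use one of those frameworks, as the linked how-to articles describe):

```python
# Data parallelism in miniature: each "worker" computes gradients on its own
# shard of the data, then the gradients are averaged (an all-reduce) and one
# shared update is applied. Illustrative sketch only.

def gradient(w, x, y):
    """Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w."""
    return (w * x - y) * x

def data_parallel_step(w, shards, lr=0.1):
    """One update step across all workers (shards)."""
    grads = []
    for shard in shards:  # in a real system these run in parallel on separate GPUs
        g = sum(gradient(w, x, y) for x, y in shard) / len(shard)
        grads.append(g)
    avg = sum(grads) / len(grads)  # "all-reduce": average gradients across workers
    return w - lr * avg

# Two workers, each holding a shard of (x, y) pairs drawn from y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward the true weight, 2.0
```

Each worker touches only its shard, so the per-step work scales down with the number of workers; only the small averaged gradient needs to be communicated.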
