
Commit 9821437

committed
grammar
1 parent 7c85a21 commit 9821437

File tree: 1 file changed, +1 -1 lines changed


articles/machine-learning/concept-distributed-training.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ ms.date: 03/24/2020
 
 Distributed training refers to the ability to share data loads and training tasks across multiple GPUs to accelerate model training. The typical use case for distributed training is for training [deep learning](concept-deep-learning-vs-machine-learning.md) models.
 
-Deep neural networks are often computed intensive as they require large learning workloads in order to processing millions of examples and parameters across its multiple layers. This deep learning lends itself well to distributed training, since running tasks in parallel instead of serially saves time and compute resources.
+Deep neural networks are often compute intensive, as they require large training workloads to process millions of examples and parameters across their many layers. Deep learning therefore lends itself well to distributed training, since running tasks in parallel instead of serially saves time and compute resources.
 
 ## Distributed training in Azure Machine Learning
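The data-parallel pattern the changed paragraph describes — each worker trains on its own shard of the data and the workers' gradients are combined into one shared update — can be sketched minimally. This is a hypothetical, framework-free illustration (all function and variable names are invented for this sketch, not part of the article or of Azure Machine Learning):

```python
# Minimal sketch of data-parallel training (hypothetical names):
# each "worker" computes the gradient of a linear model's squared error
# on its own shard of the batch; the local gradients are averaged (the
# role of an all-reduce) and one shared parameter update is applied.

def gradient(w, shard):
    # d/dw of the mean squared error 0.5 * (w*x - y)**2 over the shard
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers=2, lr=0.1):
    # Split the batch into one shard per worker.
    shard_size = len(batch) // num_workers
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(num_workers)]
    # Each worker computes a local gradient (in parallel on real hardware;
    # serially here for clarity).
    local_grads = [gradient(w, s) for s in shards]
    # Average the local gradients, then take one shared update step.
    g = sum(local_grads) / num_workers
    return w - lr * g

# Toy data following y = 2x, so the weight should converge toward 2.0.
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch)
print(round(w, 3))  # → 2.0
```

Because the shards are equal-sized, averaging the per-worker gradients gives the same update as computing the gradient over the whole batch serially — which is why the parallel version saves time without changing the result.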
