Commit 027d99a: more intro work

1 parent: 2391115

1 file changed (+3, -6 lines)

articles/machine-learning/concept-distributed-training.md

Lines changed: 3 additions & 6 deletions
@@ -13,15 +13,12 @@ ms.date: 03/27/2020
 
 # Distributed training with Azure Machine Learning
 
-In distributed training the work load to train a model is split up and shared among multiple mini processors, called worker nodes. These worker nodes work in parallel to speed up model training.
-
-This training is well suited for compute and time intensive tasks, like training deep neural networks and [deep learning](concept-deep-learning-vs-machine-learning.md).
-
-There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). Azure Machine Learning currently only supports integrations with frameworks that can perform data parallelism.
+In distributed training, the workload to train a model is split up and shared among multiple mini processors, called worker nodes. These worker nodes work in parallel to speed up model training. Distributed training is well suited for compute- and time-intensive tasks, like [deep learning](concept-deep-learning-vs-machine-learning.md) for training deep neural networks.
 
 ## Distributed training in Azure Machine Learning
 
-Azure Machine Learning is integrated with popular deep learning frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
+There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism).
+Azure Machine Learning supports integrations with the popular deep learning frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
 
 * [Distributed training with PyTorch in the Python SDK](how-to-train-pytorch.md#distributed-training)
 
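
For readers who want to see what the data parallelism described in the changed text looks like in practice, below is a minimal, hypothetical sketch of a Horovod-based data-parallel training loop in PyTorch. It is not part of this commit and does not show Azure Machine Learning's own job-submission APIs; it assumes the horovod and torch packages are installed, uses synthetic data, and would be launched with something like `horovodrun -np 2 python train.py`. Each worker trains on its own shard of the data while Horovod averages gradients across workers.

```python
# Hypothetical illustration of data-parallel training with Horovod + PyTorch.
# Not part of this commit; assumes `horovod` and `torch` are installed.
import torch
import torch.nn as nn
import horovod.torch as hvd
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

hvd.init()  # one process per worker

# Synthetic regression data; the sampler gives each worker its own shard.
dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
sampler = DistributedSampler(dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

model = nn.Linear(10, 1)
# Scale the learning rate by the number of workers (common Horovod practice).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Start every worker from the same parameters and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# DistributedOptimizer averages gradients across workers on each step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

for epoch in range(3):
    sampler.set_epoch(epoch)  # reshuffle shards each epoch
    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
    if hvd.rank() == 0:
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```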
