Commit 3026a6d

minor tweaks
1 parent f57b640

1 file changed: +3 -3 lines


articles/machine-learning/concept-distributed-training.md

@@ -13,19 +13,19 @@ ms.date: 03/27/2020
 
 # Distributed training with Azure Machine Learning
 
-In this article, you learn about distributed training and how Azure Machine Learning supports it.
+In this article, you learn about distributed training and how Azure Machine Learning supports it for deep learning models.
 
 In distributed training the workload to train a model is split up and shared among multiple mini processors, called worker nodes. These worker nodes work in parallel to speed up model training. Distributed training is well suited for compute and time intensive tasks, like [deep learning](concept-deep-learning-vs-machine-learning.md) for training deep neural networks.
 
 ## Deep learning and distributed training
 
-There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training, the [Azure Machine Learning SDK in Python](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py) supports integrations with popular deep learning frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
+There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training on deep learning models, the [Azure Machine Learning SDK in Python](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py) supports integrations with popular frameworks, PyTorch and TensorFlow. Both frameworks employ data parallelism for distributed training, and leverage [horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) for optimizing compute speeds.
 
 * [Distributed training with PyTorch](how-to-train-pytorch.md#distributed-training)
 
 * [Distributed training with TensorFlow](how-to-train-tensorflow.md#distributed-training)
 
-For training traditional ML models, see [train models with Azure Machine Learning](concept-train-machine-learning-model.md#python-sdk) for the different ways to train models using the Python SDK.
+For training ML models that don't require distributed training, see [train models with Azure Machine Learning](concept-train-machine-learning-model.md#python-sdk) for the different ways to train models using the Python SDK.
 
 ## Data parallelism
 
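Inside the training script itself, the data-parallel pattern the article describes (each worker holds a full model replica, trains on its own data shard, and Horovod averages gradients across workers) looks roughly like the outline below; `build_model()` is a placeholder, not a real function.

```python
# Hypothetical outline of Horovod data parallelism in the training script:
# each worker trains a full model replica on its own data shard, and
# gradients are allreduce-averaged across workers at every step.
import torch
import horovod.torch as hvd

hvd.init()                               # one process per worker
torch.cuda.set_device(hvd.local_rank())  # pin each process to one GPU

model = build_model().cuda()             # build_model() is a placeholder
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across all workers.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start every worker from identical weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
```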