Commit 1773b01

Merge pull request #1852 from ssalgadodev/patch-31
Update concept-distributed-training.md
2 parents (0c19cf2 + e748736) · commit 1773b01


articles/machine-learning/concept-distributed-training.md

Lines changed: 4 additions & 4 deletions
@@ -4,13 +4,13 @@ titleSuffix: Azure Machine Learning
 description: Learn what type of distributed training Azure Machine Learning supports and the open source framework integrations available for distributed training.
 services: machine-learning
 ms.service: azure-machine-learning
-author: sdgilley
-ms.author: sgilley
+author: ssalgadodev
+ms.author: ssalgado
 ms.reviewer: ratanase
 ms.subservice: training
 ms.custom: build-2023
 ms.topic: conceptual
-ms.date: 03/22/2024
+ms.date: 12/05/2024
 ---
 
 # Distributed training with Azure Machine Learning
@@ -21,7 +21,7 @@ In distributed training, the workload to train a model is split up and shared am
 
 ## Deep learning and distributed training
 
-There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training on deep learning models, the [Azure Machine Learning SDK in Python](/python/api/overview/azure/ml/intro) supports integrations with PyTorch and TensorFlow. Both are popular frameworks that employ data parallelism for distributed training, and can use [Horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) to optimize compute speeds.
+There are two main types of distributed training: [data parallelism](#data-parallelism) and [model parallelism](#model-parallelism). For distributed training on deep learning models, the [Azure Machine Learning SDK in Python](https://github.com/Azure/azure-sdk-for-python/blob/main/README.md) supports integrations with PyTorch and TensorFlow. Both are popular frameworks that employ data parallelism for distributed training, and can use [Horovod](https://horovod.readthedocs.io/en/latest/summary_include.html) to optimize compute speeds.
 
 * [Distributed training with PyTorch](how-to-train-distributed-gpu.md#pytorch)
 
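For context on the paragraph changed above: it describes data parallelism with PyTorch. The sketch below is purely illustrative and not part of this commit or the article; it shows the data-parallel pattern with PyTorch's `DistributedDataParallel`, using a placeholder model, dataset, and hyperparameters, and assumes a multi-GPU machine launched with `torchrun`.

```python
# Minimal data-parallelism sketch with PyTorch DistributedDataParallel (DDP).
# Launch with, e.g.: torchrun --nproc_per_node=<num_gpus> train.py
# The model, dataset, and hyperparameters are placeholders for illustration.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model; DDP keeps one replica per GPU and syncs gradients between them.
    model = torch.nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # Each replica trains on a different shard of the data via DistributedSampler.
    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across replicas here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The Azure Machine Learning how-to linked in the diff (how-to-train-distributed-gpu.md#pytorch) covers running this kind of script on managed GPU clusters.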