
Commit 21d6c84

Acrolinx
1 parent 185e11a commit 21d6c84

File tree: 1 file changed, +6 -6 lines


articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md

Lines changed: 6 additions & 6 deletions
@@ -21,7 +21,7 @@ Within Azure Synapse Analytics, users can quickly get started with Horovod using
 - Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
 
 > [!NOTE]
-> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
+> The Preview for Azure Synapse GPU-enabled pools is deprecated.
 
 ## Configure the Apache Spark session

@@ -46,7 +46,7 @@ In the example, you can see how the Spark configurations can be passed with the
 }
 ```
 
-For this tutorial, we will use the following configurations:
+For this tutorial, we'll use the following configurations:
 
 ```python
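The configuration values themselves fall outside this hunk's context window. For orientation, here is a minimal sketch of a Synapse `%%configure` cell; the JSON keys are standard Livy session settings, but every value below is an assumed placeholder rather than the article's actual configuration:

```python
%%configure -f
{
    "driverMemory": "30g",
    "driverCores": 4,
    "executorMemory": "60g",
    "executorCores": 12,
    "numExecutors": 3
}
```

`numExecutors` is set to 3 here only to match the 3-node pool suggested in the prerequisites and the `num_proc = 3 # equal to numExecutors` line visible in a later hunk.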

@@ -97,7 +97,7 @@ from azure.synapse.ml.horovodutils import AdlsStore
 
 ## Connect to alternative storage account
 
-We need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
+We need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you're using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
 
 ```python
 num_proc = 3 # equal to numExecutors
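The diff shows only the first line of this code cell. A hedged sketch of the three properties the paragraph asks you to modify, with placeholder values (the `abfss://` URL shape is the usual ADLS Gen2 form, not something confirmed by this diff):

```python
num_proc = 3  # equal to numExecutors

# Placeholder values: substitute your own storage account details.
remote_url = "abfss://<container>@<account_name>.dfs.core.windows.net"
account_name = "<account_name>"
linked_service_name = "<linked_service_name>"
```

The `AdlsStore` import visible in this hunk's header presumably consumes these values, but its exact call signature isn't shown here, so it's omitted from the sketch.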
@@ -127,7 +127,7 @@ print(adls_store_path)
 
 ## Prepare dataset
 
-Next, we will prepare the dataset for training. In this tutorial, we will use the MNIST dataset from [Azure Open Datasets](/azure/open-datasets/dataset-mnist?tabs=azureml-opendatasets).
+Next, we'll prepare the dataset for training. In this tutorial, we'll use the MNIST dataset from [Azure Open Datasets](/azure/open-datasets/dataset-mnist?tabs=azureml-opendatasets).
 
 ```python
 # Initialize SparkSession
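The body of this cell is also truncated. Here is a minimal sketch using the `azureml-opendatasets` package that the linked page documents; the article's exact loading and column handling may differ:

```python
from pyspark.sql import SparkSession
from azureml.opendatasets import MNIST

# Initialize SparkSession (returns the existing session inside a Synapse notebook)
spark = SparkSession.builder.getOrCreate()

# Pull the MNIST tabular dataset and materialize it as a pandas DataFrame
mnist = MNIST.get_tabular_dataset()
mnist_df = mnist.to_pandas_dataframe()
mnist_df.head()
```

The `mnist_df.head()` call matches the next hunk's header, so the variable name is grounded in the diff; the loading steps before it are an assumption.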
@@ -148,7 +148,7 @@ mnist_df.head()
 
 ## Process data with Apache Spark
 
-Now, we will create an Apache Spark dataframe. This dataframe will be used with the ```HorovodEstimator``` for training.
+Now, we'll create an Apache Spark dataframe. This dataframe will be used with the ```HorovodEstimator``` for training.
 
 ```python
 # Create Spark DataFrame for training
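Again the cell is cut off after its opening comment. A sketch of one plausible shape for this step, converting the pandas frame into a Spark DataFrame; the `repartition(num_proc)` call is an assumption, added because Horovod training typically wants one partition per worker process:

```python
# Create Spark DataFrame for training; one partition per Horovod process
train_df = spark.createDataFrame(mnist_df).repartition(num_proc)
train_df.count()
```

`train_df.count()` is grounded in the next hunk's header; the conversion line before it is illustrative.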
@@ -167,7 +167,7 @@ train_df.count()
 
 ## Define DNN model
 
-Once we are finished processing our dataset, we can now define our PyTorch model. The same code could also be used to train a single-node PyTorch model.
+Once we're finished processing our dataset, we can now define our PyTorch model. The same code could also be used to train a single-node PyTorch model.
 
 ```python
 # Define the PyTorch model without any Horovod-specific parameters
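The model definition is truncated after its opening comment. Since the paragraph says the same code works for single-node PyTorch, here is a small stand-in MNIST classifier with no Horovod-specific parameters; the article's actual architecture isn't visible in this diff:

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small MNIST classifier; an illustrative stand-in, not the article's model."""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten 28x28 images
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)
```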
