articles/machine-learning/service/how-to-train-chainer.md (+2 −2)
@@ -1,7 +1,7 @@
---
title: Train deep learning neural network with Chainer
titleSuffix: Azure Machine Learning
- description: Learn how to run your PyTorch training scripts at enterprise scale using Azure Machine Learning's Chainer estimator class. The example script classifis handwritten digit images to build a deep learning neural network using the Chainer Python library running on top of numpy.
+ description: Learn how to run your Chainer training scripts at enterprise scale using Azure Machine Learning's Chainer estimator class. The example script classifies handwritten digit images to build a deep learning neural network using the Chainer Python library running on top of numpy.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
@@ -81,7 +81,7 @@ In this tutorial, the training script **chainer_mnist.py** is already provided f
To use Azure ML's tracking and metrics capabilities, add a small amount of Azure ML code inside your training script. The training script **chainer_mnist.py** shows how to log some metrics to your Azure ML run using the `Run` object within the script.
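Inside the training script, that logging might look like the following sketch. The azureml-core calls need a submitted run, so they are shown as comments; the accuracy values below are illustrative stand-ins for numbers produced during training.

```python
# Sketch of logging a metric from inside chainer_mnist.py.
# The counts here are illustrative assumptions, not real training output.
correct, total = 9_210, 10_000
accuracy = correct / total

# from azureml.core.run import Run
# run = Run.get_context()          # handle to the current Azure ML run
# run.log('accuracy', accuracy)    # metric appears in the run's charts
```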
- The provided training script uses example data from the chainer `datasets.mnist.get_mnist` function. For your own data, you may need to use steps such as [Upload dataset and scripts](how-to-train-keras.md#upload-dataset-and-scripts) to make data available during training.
+ The provided training script uses example data from the chainer `datasets.mnist.get_mnist` function. For your own data, you may need to use steps such as [Upload dataset and scripts](how-to-train-keras.md) to make data available during training.

Copy the training script **chainer_mnist.py** into your project directory.
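Staging the script can be sketched with the standard library; the folder name here is an illustrative assumption.

```python
# Minimal sketch of copying the training script into a project directory
# so it is bundled with the run when submitted. The folder name is an
# illustrative assumption.
import os
import shutil

project_folder = './chainer-mnist'
os.makedirs(project_folder, exist_ok=True)

# Copy the training script if it is present in the current directory.
if os.path.exists('./chainer_mnist.py'):
    shutil.copy('./chainer_mnist.py', project_folder)
```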
The [datastore](how-to-access-data.md) is a place where data can be stored and accessed by mounting or copying the data to the compute target. Each workspace provides a default datastore. Upload the data and training scripts to the datastore so that they can be easily accessed during training.

+ ### Create a file dataset

- 1. Download the MNIST dataset locally.
+ A `FileDataset` object references one or multiple files in your workspace datastore or public URLs. The files can be of any format, and the class provides you with the ability to download or mount the files to your compute. By creating a `FileDataset`, you create a reference to the data source location. If you applied any transformations to the data set, they will be stored in the data set as well. The data remains in its existing location, so no extra storage cost is incurred. See the [how-to](https://docs.microsoft.com/azure/machine-learning/service/how-to-create-register-datasets) guide on the `Dataset` package for more information.

- 1. Upload the Keras training script, `keras_mnist.py`, and the helper file, `utils.py`.
+ Use the `register()` method to register the data set to your workspace so it can be shared with others, reused across various experiments, and referred to by name in your training script.

-    ```Python
-    shutil.copy('./keras_mnist.py', script_folder)
-    shutil.copy('./utils.py', script_folder)
-    ```
+    ```python
+    dataset = dataset.register(workspace=ws,
+                               name='mnist dataset',
+                               description='training and test dataset',
+                               create_new_version=True)
+    ```
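Once registered, the dataset can be retrieved by name in later experiments. The following sketch uses `Dataset.get_by_name` from azureml-core; since it needs a live workspace, the SDK calls are shown as comments, and the name matches the illustrative one used in the `register()` call.

```python
# The name under which the dataset was registered (illustrative).
dataset_name = 'mnist dataset'

# from azureml.core import Workspace, Dataset
# ws = Workspace.from_config()
# dataset = Dataset.get_by_name(workspace=ws, name=dataset_name)
# paths = dataset.download(target_path='./data', overwrite=True)
```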
## Create a compute target
@@ -139,11 +128,22 @@ For more information on compute targets, see the [what is a compute target](conc
The [TensorFlow estimator](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) provides a simple way of launching TensorFlow training jobs on a compute target. Since Keras runs on top of TensorFlow, you can use the TensorFlow estimator and import the Keras library using the `pip_packages` argument.

+ First get the data from the workspace datastore using the `Dataset` class.

The TensorFlow estimator is implemented through the generic [`estimator`](https://docs.microsoft.com//python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) class, which can be used to support any framework. Additionally, create a dictionary `script_params` that contains the DNN hyperparameter settings. For more information about training models using the generic estimator, see [train models with Azure Machine Learning using estimator](how-to-train-ml-models.md).
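The `script_params` dictionary and estimator construction might look like the following sketch. The flag names and values are illustrative assumptions (they must match the arguments parsed by `keras_mnist.py`), and the estimator call itself needs the azureml-sdk plus a compute target, so it is shown as a comment.

```python
# Hypothetical DNN hyperparameter settings; the flag names and values are
# illustrative assumptions matching arguments parsed by keras_mnist.py.
script_params = {
    '--batch-size': 50,
    '--epochs': 20,
    '--learning-rate': 0.001,
}

# The estimator would then be constructed along these lines (requires the
# azureml-sdk, a workspace, and a compute target, so shown as comments):
# from azureml.train.dnn import TensorFlow
# est = TensorFlow(source_directory=project_folder,
#                  script_params=script_params,
#                  compute_target=compute_target,
#                  entry_script='keras_mnist.py',
#                  pip_packages=['keras'])
```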