articles/machine-learning/how-to-train-with-datasets.md

ms.author: sihhu
author: MayMSFT
manager: cgronlun
ms.reviewer: nibaccam
ms.date: 04/20/2020

# Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
---
To create and train with datasets, you need:

> [!Note]
> Some Dataset classes have dependencies on the [azureml-dataprep](https://docs.microsoft.com/python/api/azureml-dataprep/?view=azure-ml-py) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.

## Use datasets directly in training scripts

If you have structured data, create a TabularDataset and use it directly in your training script for your local or remote experiment.

In this example, you create a [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) and use it as a direct input to your `estimator` object for training.

### Create a TabularDataset

TabularDataset objects provide the ability to load the data into a pandas or Spark DataFrame, so that you can work with familiar data preparation and training libraries without having to leave your notebook. To leverage this capability, see [how to access the input dataset in your training script](#access-the-input-dataset-in-your-training-script).

The following code creates an unregistered TabularDataset from a web URL. You can also create datasets from local files or paths in datastores. Learn more about [how to create datasets](https://aka.ms/azureml/howto/createdatasets).
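
A minimal sketch of that step (the Titanic CSV URL shown here is the public sample file used elsewhere in Azure ML docs, and is an assumption in this sketch):

```Python
from azureml.core import Dataset

# create an unregistered TabularDataset directly from a public CSV file on the web
web_path = 'https://dprepdata.blob.core.windows.net/demo/Titanic.csv'
titanic_ds = Dataset.Tabular.from_delimited_files(path=web_path)

# preview the first few rows as a pandas DataFrame
print(titanic_ds.take(3).to_pandas_dataframe())
```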

### Configure the estimator

An [estimator](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) object is used to submit the experiment run. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as a generic estimator.

This code creates a generic estimator object, `est`, that specifies:

* A script directory for your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
* The training script, *train_titanic.py*.
* The input dataset for training, `titanic_ds`. `as_named_input()` is required so that the input dataset can be referenced by the assigned name `titanic` in your training script.
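
A minimal sketch of such a configuration (assuming `ws`, `script_folder`, `compute_target`, and the `titanic_ds` dataset from the earlier step already exist; the experiment name is illustrative):

```Python
from azureml.core import Experiment
from azureml.train.estimator import Estimator

# generic estimator: the input dataset is passed under the name 'titanic'
est = Estimator(source_directory=script_folder,
                entry_script='train_titanic.py',
                compute_target=compute_target,
                inputs=[titanic_ds.as_named_input('titanic')])

experiment = Experiment(workspace=ws, name='titanic-training')
run = experiment.submit(est)
```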

### Access the input dataset in your training script

If you want to get the dataset used in your training run, you can access it from within your training script.

The following code uses the `get_context()` method in the [`Run`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.run.run?view=azure-ml-py) class to access the input TabularDataset, `titanic`, in the training script, and then uses the [`to_pandas_dataframe()`](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--null---out-of-range-datetime--null--) method to load that dataset into a pandas DataFrame.

```Python
%%writefile $script_folder/train_titanic.py

from azureml.core import Dataset, Run

run = Run.get_context()
# get the input dataset by name
dataset = run.input_datasets['titanic']
# load the TabularDataset to pandas DataFrame
df = dataset.to_pandas_dataframe()
```

## Mount files to remote compute targets

If you have unstructured data, create a [FileDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) and either mount or download your data files to make them available to your remote compute target for training. Learn about when to use [mount vs. download](#mount-vs-download) for your training experiments.

The following example creates a FileDataset and mounts the dataset to the compute target by passing it as an argument in the estimator for training.
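
One way such a FileDataset might be created (a sketch; the default datastore and the `mnist/*.gz` path are assumptions for illustration):

```Python
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()

# create a FileDataset from files already uploaded to the workspace's default datastore
datastore = ws.get_default_datastore()
mnist_ds = Dataset.File.from_files(path=(datastore, 'mnist/*.gz'))
```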

We recommend passing the dataset as an argument when mounting. Besides passing the dataset through the `inputs` parameter in the estimator, you can also pass the dataset through `script_params` and get the data path (mount point) in your training script via arguments. This way, you keep your training script independent of the azureml-sdk, so you will be able to use the same training script for local debugging and remote training on any cloud platform.

An [SKLearn](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.sklearn.sklearn?view=azure-ml-py) estimator object is used to submit the run for scikit-learn experiments. After you submit the run, the data files referred to by the `mnist` dataset will be mounted to the compute target. Learn more about training with the [SKlearn estimator](how-to-train-scikit-learn.md).

```Python
from azureml.train.sklearn import SKLearn

# pass the mounted dataset path to the training script as a script argument;
# the script_params entry and entry script name here are representative
script_params = {
    '--data-folder': mnist_ds.as_named_input('mnist').as_mount()
}

est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
              entry_script='train_mnist.py')

run = experiment.submit(est)
run.wait_for_completion(show_output=True)
```

### Retrieve the data in your training script

The following code shows how to retrieve the data in your script.
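
A minimal sketch of such a script (assuming the `--data-folder` argument wired up through `script_params` above):

```Python
%%writefile $script_folder/train_mnist.py

import argparse
import os

# the mount point arrives as a script argument, matching the '--data-folder' entry in script_params
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder')
args = parser.parse_args()

print('Data folder:', args.data_folder)
print(os.listdir(args.data_folder))
```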

### Mount vs. download

Mounting or downloading files of any format is supported for datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL.

When you mount a dataset, you attach the files referenced by the dataset to a directory (mount point) and make it available on the compute target. Mounting is supported for Linux-based computes, including Azure Machine Learning Compute, virtual machines, and HDInsight.

When you download a dataset, all the files referenced by the dataset are downloaded to the compute target. Downloading is supported for all compute types.

If your script processes all files referenced by the dataset, and your compute disk can fit your full dataset, downloading is recommended to avoid the overhead of streaming data from storage services. If your data size exceeds the compute disk size, downloading is not possible. For this scenario, we recommend mounting, since only the data files used by your script are loaded at the time of processing.

The following code mounts `dataset` to the temp directory at `mounted_path`:

```python
import os
import tempfile

mounted_path = tempfile.mkdtemp()

# mount the dataset onto the mounted_path of a Linux-based compute
mount_context = dataset.mount(mounted_path)
mount_context.start()

print(os.listdir(mounted_path))
print(mounted_path)
```
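
For the download alternative, a similar sketch (the `./data` target path is illustrative):

```python
import os

# download all files referenced by the dataset to a directory on the compute's disk
download_paths = dataset.download(target_path='./data', overwrite=True)
print(download_paths[:3])
print(os.listdir('./data'))
```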

## Notebook examples

The [dataset notebooks](https://aka.ms/dataset-tutorial) demonstrate and expand upon concepts in this article.

## Next steps

* [Auto train machine learning models](how-to-auto-train-remote.md) with TabularDatasets.
* [Train image classification models](https://aka.ms/filedataset-samplenotebook) with FileDatasets.
* [Train with datasets using pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb).