---
title: Train and register Keras models running on TensorFlow
titleSuffix: Azure Machine Learning service
description: This article shows you how to train and register a Keras model running on TensorFlow using Azure Machine Learning service.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.author: minxia
author: mx-iao
ms.date: 06/07/2019
ms.custom: seodec18
---

# Train and register Keras models at scale with Azure Machine Learning service

This article shows you how to train and register a Keras model built on TensorFlow using Azure Machine Learning service. It uses the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify handwritten digits with a deep neural network (DNN) built using the [Keras Python library](https://keras.io) running on top of [TensorFlow](https://www.tensorflow.org/overview).

Keras is a high-level neural network API capable of running on top of other frameworks, and it simplifies the process of building neural networks. With Azure Machine Learning service, you can rapidly scale out open-source training jobs using elastic cloud compute resources. You can also track your training runs, version models, deploy models, and much more.

Whether you're developing a Keras model from the ground up or you're bringing an existing model into the cloud, Azure Machine Learning service can help you build production-ready models.
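
For reference, the model trained in this article is a simple fully connected DNN. A minimal sketch of that kind of network in standalone Keras might look like the following; the layer sizes mirror the script parameters used later in this article, but the actual architecture (and the optimizer) is defined in `keras_mnist.py`:

```Python
from keras.models import Sequential
from keras.layers import Dense

# A fully connected DNN for 28x28 MNIST images, flattened to 784 features
model = Sequential()
model.add(Dense(300, activation='relu', input_shape=(784,)))  # first hidden layer
model.add(Dense(100, activation='relu'))                      # second hidden layer
model.add(Dense(10, activation='softmax'))                    # one output per digit class

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
```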

## Prerequisites

- An Azure subscription. Try the [free or paid version of Azure Machine Learning service](https://aka.ms/AMLFree) today.
- [Install the Azure Machine Learning SDK for Python](setup-create-workspace.md#sdk)
- [Download the sample script files](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-keras) `keras_mnist.py` and `utils.py`

You can also find a completed [Jupyter Notebook version](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) of this guide on the GitHub samples page. The notebook includes expanded sections covering intelligent hyperparameter tuning and model deployment.

## Set up the experiment

This section sets up the training experiment by loading the required Python packages, initializing a workspace, creating an experiment, and uploading the training data and training scripts.

### Import packages

First, import the necessary Python libraries.

```Python
import os
import urllib
import shutil
import azureml

from azureml.core import Experiment
from azureml.core import Workspace, Run

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Used later in this article to configure the training job
from azureml.train.dnn import TensorFlow
```

### Initialize a workspace

The [Azure Machine Learning service workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a [`Workspace`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py) object.

Create a workspace by finding a value for the `<azure-subscription-id>` parameter in the [subscriptions list in the Azure portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Use any subscription in which your role is owner or contributor. For more information on roles, see the [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md) article.

```Python
ws = Workspace.create(name='myworkspace',
                      subscription_id='<azure-subscription-id>',
                      resource_group='myresourcegroup',
                      create_resource_group=True,
                      location='<select-location>'  # For example: 'eastus2'
                      )
```
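
If you've already created a workspace, you can persist its details and load them in later sessions instead of re-creating it. A quick sketch of that pattern:

```Python
# Write the workspace details to a local JSON config file
ws.write_config()

# In a later session, rehydrate the same workspace from that file
ws = Workspace.from_config()
```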

### Create an experiment

Create an experiment and a folder to hold your training scripts. In this example, create an experiment called "keras-mnist".

```Python
script_folder = './keras-mnist'
os.makedirs(script_folder, exist_ok=True)

exp = Experiment(workspace=ws, name='keras-mnist')
```

### Upload dataset and scripts

The [datastore](how-to-access-data.md) is a place where data can be stored and accessed by mounting or copying the data to the compute target. Each workspace provides a default datastore. Upload the data and training scripts to the datastore so that they can be easily accessed during training.

1. Download the MNIST dataset locally.

    ```Python
    os.makedirs('./data/mnist', exist_ok=True)

    urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename='./data/mnist/train-images.gz')
    urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename='./data/mnist/train-labels.gz')
    urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename='./data/mnist/test-images.gz')
    urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename='./data/mnist/test-labels.gz')
    ```

1. Upload the MNIST dataset to the default datastore.

    ```Python
    ds = ws.get_default_datastore()
    ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)
    ```

1. Upload the Keras training script, `keras_mnist.py`, and the helper file, `utils.py`.

    ```Python
    shutil.copy('./keras_mnist.py', script_folder)
    shutil.copy('./utils.py', script_folder)
    ```

## Get the default compute target

Each workspace comes with two default compute targets: a GPU-based compute target and a CPU-based compute target. The default compute targets have autoscale set to 0, which means they aren't allocated until you use them. In this example, use the default GPU compute target.

```Python
compute_target = ws.get_default_compute_target(type="GPU")
```

For more information on compute targets, see the [what is a compute target](concept-compute-target.md) article.
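
If you'd prefer to train on a dedicated cluster instead of the default target, you can use the `ComputeTarget`, `AmlCompute`, and `ComputeTargetException` classes imported earlier to create one, or reattach to it if it already exists. The following is a minimal sketch, with `gpucluster` as a hypothetical cluster name and the VM size and node counts as illustrative choices:

```Python
cluster_name = 'gpucluster'  # hypothetical name, for illustration only

try:
    # Reuse the cluster if it already exists in the workspace
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    # Otherwise provision a GPU cluster that autoscales between 0 and 4 nodes
    config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                   min_nodes=0,
                                                   max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```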

## Create a TensorFlow estimator and import Keras

The [TensorFlow estimator](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) provides a simple way of launching TensorFlow training jobs on a compute target. Since Keras runs on top of TensorFlow, you can use the TensorFlow estimator and import the Keras library using the `pip_packages` argument.

The TensorFlow estimator is implemented through the generic [`Estimator`](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) class, which can be used to support any framework. For more information about training models using the generic estimator, see [train models with Azure Machine Learning using estimator](how-to-train-ml-models.md).

```Python
script_params = {
    '--data-folder': ds.path('mnist').as_mount(),
    '--batch-size': 50,
    '--first-layer-neurons': 300,
    '--second-layer-neurons': 100,
    '--learning-rate': 0.001
}

est = TensorFlow(source_directory=script_folder,
                 entry_script='keras_mnist.py',
                 script_params=script_params,
                 compute_target=compute_target,
                 pip_packages=['keras', 'matplotlib'],
                 use_gpu=True)
```
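
The keys in `script_params` are passed to `keras_mnist.py` as command-line arguments, which the entry script reads with `argparse`. A simplified sketch of that pattern (the real script in the sample repository does more):

```Python
import argparse

# Parse the arguments supplied through script_params
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, help='mounted path to the MNIST data')
parser.add_argument('--batch-size', type=int, default=50)
parser.add_argument('--first-layer-neurons', type=int, default=300)
parser.add_argument('--second-layer-neurons', type=int, default=100)
parser.add_argument('--learning-rate', type=float, default=0.001)
args = parser.parse_args()

print('Training data folder:', args.data_folder)
```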

## Submit a run

The [Run object](https://docs.microsoft.com/python/api/azureml-core/azureml.core.run%28class%29?view=azure-ml-py) provides the interface to the run history while the job is running and after it has completed.

```Python
run = exp.submit(est)
run.wait_for_completion(show_output=True)
```
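
If you're working in a Jupyter notebook, you can also monitor the run interactively with the `RunDetails` widget (from the `azureml-widgets` package) instead of streaming console output:

```Python
from azureml.widgets import RunDetails

# Render a live-updating view of the run's status, logs, and metrics
RunDetails(run).show()
```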

As the run is executed, it goes through the following stages:

- **Preparing**: A Docker image is created according to the TensorFlow estimator. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress.

- **Scaling**: If the Batch AI cluster requires more nodes to execute the run than are currently available, the cluster attempts to scale up.

- **Running**: All scripts in the script folder are uploaded to the compute target, data stores are mounted or copied, and the `entry_script` is executed. Outputs from stdout and the `./logs` folder are streamed to the run history and can be used to monitor the run.

- **Post-Processing**: The `./outputs` folder of the run is copied over to the run history.

## Register the model

Once you've trained the model, you can register it to your workspace. Model registration lets you store and version your models in your workspace to simplify [model management and deployment](concept-model-management-and-deployment.md).

```Python
model = run.register_model(model_name='keras-dnn-mnist', model_path='outputs/model')
```
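
Registering under the same name again creates a new version rather than overwriting the model. You can later retrieve a registered model by name; for example:

```Python
from azureml.core.model import Model

# Fetch the latest registered version of the model from the workspace
registered_model = Model(ws, name='keras-dnn-mnist')
print(registered_model.name, registered_model.version)
```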

You can also download a local copy of the model by using the Run object. In the training script `keras_mnist.py`, a TensorFlow saver object persists the model to a local folder (local to the compute target). You can use the Run object to download a copy.

```Python
# Create a model folder in the current directory
os.makedirs('./model', exist_ok=True)

for f in run.get_file_names():
    if f.startswith('outputs/model'):
        output_file_path = os.path.join('./model', f.split('/')[-1])
        print('Downloading from {} to {} ...'.format(f, output_file_path))
        run.download_file(name=f, output_file_path=output_file_path)
```

## Next steps

In this article, you trained and registered a Keras model running on TensorFlow with Azure Machine Learning service. To learn how to deploy a model, continue on to our model deployment article.

> [!div class="nextstepaction"]
> [How and where to deploy models](how-to-deploy-and-where.md)
