articles/machine-learning/service/tutorial-deploy-models-with-aml.md
1 addition & 1 deletion
@@ -32,7 +32,7 @@ In this part of the tutorial, you use Azure Machine Learning service for the fol
Container Instances is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](how-to-deploy-and-where.md).
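For orientation, the Container Instances deployment later in this tutorial is driven by a deployment configuration object from the SDK. A minimal sketch follows; the name `aciconfig`, the tag values, and the resource sizes are illustrative, not taken from this diff:

```python
from azureml.core.webservice import AciWebservice

# illustrative values; the tutorial model is small, so one core and 1 GB of memory suffice
aciconfig = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    tags={'data': 'MNIST', 'method': 'sklearn'},
    description='Predict MNIST with sklearn')
```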
>[!NOTE]
-> Code in this article was tested with Azure Machine Learning SDK version 1.0.8.
+> Code in this article was tested with Azure Machine Learning SDK version 1.0.41.
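If you want to confirm which SDK version is installed in your environment before you start, the SDK exposes its version string; this small check is an addition for convenience:

```python
import azureml.core

# print the installed Azure Machine Learning SDK version to compare against the tested version
print(azureml.core.VERSION)
```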
## Prerequisites
Skip to [Set the development environment](#start) to read through the notebook steps.
articles/machine-learning/service/tutorial-train-models-with-aml.md
28 additions & 39 deletions
@@ -18,7 +18,7 @@ ms.custom: seodec18
In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**.
-This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning service. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multiclass classifier to identify the digit a given image represents.
+This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning service. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multiclass classifier to identify the digit a given image represents.
Learn how to take the following actions:
@@ -28,12 +28,12 @@ Learn how to take the following actions:
> * Train a simple logistic regression model on a remote cluster.
> * Review training results and register the best model.
-You learn how to select a model and deploy it in [part two of this tutorial](tutorial-deploy-models-with-aml.md).
+You learn how to select a model and deploy it in [part two of this tutorial](tutorial-deploy-models-with-aml.md).
If you don’t have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning service](https://aka.ms/AMLFree) today.
>[!NOTE]
-> Code in this article was tested with Azure Machine Learning SDK version 1.0.8.
+> Code in this article was tested with Azure Machine Learning SDK version 1.0.41.
## Prerequisites
@@ -47,8 +47,8 @@ Skip to [Set up your development environment](#start) to read through the notebo
* The configuration file for the workspace in the same directory as the notebook
Get all these prerequisites from either of the sections below.
-
-* Use a [cloud notebook server in your workspace](#azure)
+
+* Use a [cloud notebook server in your workspace](#azure)
* Use [your own notebook server](#server)
### <a name="azure"></a>Use a cloud notebook server in your workspace
@@ -59,7 +59,6 @@ It's easy to get started with your own cloud-based notebook server. The [Azure M
* After you launch the notebook webpage, open the **tutorials/img-classification-part1-training.ipynb** notebook.
-
### <a name="server"></a>Use your own Jupyter notebook server
```('./data/test-labels.gz', <http.client.HTTPMessage at 0x7f40864c77b8>)```
### Display some sample images
Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. This step requires a `load_data` function that's included in a `utils.py` file. This file is included in the sample folder. Make sure it's placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays:
-
-
```python
# make sure utils.py is in the same directory as this code
```
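As a rough illustration of what such a helper does, a `load_data`-style function unpacks the gzipped MNIST (IDX) files into `numpy` arrays. This is a simplified sketch; the actual `utils.py` shipped with the sample may differ in details:

```python
import gzip
import struct
import numpy as np

def load_data(filename, label=False):
    """Simplified sketch: parse a gzipped MNIST (IDX) file into a numpy array."""
    with gzip.open(filename) as gz:
        struct.unpack('>I', gz.read(4))                  # magic number, unused
        n_items = struct.unpack('>I', gz.read(4))[0]
        if not label:
            n_rows = struct.unpack('>I', gz.read(4))[0]
            n_cols = struct.unpack('>I', gz.read(4))[0]
            res = np.frombuffer(gz.read(n_items * n_rows * n_cols), dtype=np.uint8)
            res = res.reshape(n_items, n_rows * n_cols)  # one flattened image per row
        else:
            res = np.frombuffer(gz.read(n_items), dtype=np.uint8)
            res = res.reshape(n_items, 1)                # one label per row
    return res
```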
@@ -323,36 +319,32 @@ Notice how the script gets data and saves models:
shutil.copy('utils.py', script_folder)
```
-
### Create an estimator
-An estimator object is used to submit the run. Create your estimator by running the following code to define these items:
+An [SKLearn estimator](https://docs.microsoft.com/python/api/azureml-train-core/azureml.train.sklearn.sklearn?view=azure-ml-py) object is used to submit the run. Create your estimator by running the following code to define these items:
* The name of the estimator object, `est`.
-* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
+* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
* The compute target. In this case, you use the Azure Machine Learning compute cluster you created.
* The training script name, **train.py**.
-* Parameters required from the training script.
-* Python packages needed for training.
+* Parameters required from the training script.
In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The **data_folder** is set to use the datastore, `ds.path('mnist').as_mount()`:
```python
-from azureml.train.estimator import Estimator
+from azureml.train.sklearn import SKLearn

script_params = {
    '--data-folder': ds.path('mnist').as_mount(),
-    '--regularization': 0.8
+    '--regularization': 0.5
}

-est = Estimator(source_directory=script_folder,
+est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
-              entry_script='train.py',
-              conda_packages=['scikit-learn'])
+              entry_script='train.py')
```
-
### Submit the job to the cluster
Run the experiment by submitting the estimator object:
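The submission call itself isn't visible in this diff. In the sample notebook it's typically a one-liner on an `Experiment` object; the experiment name below is illustrative, and `ws` is assumed to be the workspace loaded earlier:

```python
from azureml.core import Experiment

# 'ws' is the Workspace object loaded earlier in the notebook (assumed)
exp = Experiment(workspace=ws, name='sklearn-mnist')

# submit the estimator; this returns a Run object you can monitor
run = exp.submit(config=est)
```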
@@ -370,7 +362,7 @@ In total, the first run takes **about 10 minutes**. But for subsequent runs, as
What happens while you wait:
-- **Image creation**: A Docker image is created that matches the Python environment specified by the estimator. The image is uploaded to the workspace. Image creation and uploading takes **about five minutes**.
+- **Image creation**: A Docker image is created that matches the Python environment specified by the estimator. The image is uploaded to the workspace. Image creation and uploading takes **about five minutes**.
This stage happens once for each Python environment because the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress by using these logs.
@@ -380,29 +372,26 @@ What happens while you wait:
- **Post-processing**: The **./outputs** directory of the run is copied over to the run history in your workspace, so you can access these results.
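This works because anything the training script writes under `./outputs` is uploaded automatically. The following is a rough, self-contained sketch of that pattern, not the sample's exact `train.py` (which trains on the mounted MNIST data):

```python
import os
import joblib
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# stand-in training step; the real train.py fits LogisticRegression on MNIST
X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# files written under ./outputs are copied to the run history when the run completes
os.makedirs('outputs', exist_ok=True)
joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')
```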
-
-You can check the progress of a running job in several ways. This tutorial uses a Jupyter widget and a `wait_for_completion` method.
+You can check the progress of a running job in several ways. This tutorial uses a Jupyter widget and a `wait_for_completion` method.
### Jupyter widget
Watch the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10 to 15 seconds until the job finishes:
-
```python
from azureml.widgets import RunDetails
RunDetails(run).show()
```
-This still snapshot is the widget shown at the end of training:
+The widget will look like the following at the end of training:
If you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).
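In code, cancelling usually comes down to calling `cancel` on the submitted run; shown here only as a reminder, assuming `run` is the object returned by `exp.submit`:

```python
# stop the run if it is still queued or executing
run.cancel()
```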
### Get log results upon completion
-Model training and monitoring happen in the background. Wait until the model has finished training before you run more code. Use `wait_for_completion` to show when the model training is finished:
-
+Model training and monitoring happen in the background. Wait until the model has finished training before you run more code. Use `wait_for_completion` to show when the model training is finished:
```python
run.wait_for_completion(show_output=False) # specify True for a verbose log
@@ -415,6 +404,7 @@ You now have a model trained on a remote cluster. Retrieve the accuracy of the m
```python
print(run.get_metrics())
```
+
The output shows the remote model has accuracy of 0.9204:
You can also delete just the Azure Machine Learning Compute cluster. However, autoscale is turned on, and the cluster minimum is zero. So this particular resource won't incur additional compute charges when not in use:
-
```python
# optionally, delete the Azure Machine Learning Compute cluster
```
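The deletion itself is typically a single call on the compute target object created earlier in the notebook; shown as a sketch, with `compute_target` assumed from those earlier cells:

```python
# removes the AmlCompute cluster from the workspace; compute_target is assumed
# to be the AmlCompute object created earlier in the notebook
compute_target.delete()
```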