articles/machine-learning/service/how-to-deploy-and-where.md (13 additions & 12 deletions)
@@ -9,14 +9,16 @@ ms.topic: conceptual
 ms.author: jordane
 author: jpe316
 ms.reviewer: larryfr
-ms.date: 05/02/2019
+ms.date: 05/21/2019
 
 ms.custom: seoapril2019
 ---
 
 # Deploy models with the Azure Machine Learning service
 
-Learn how to deploy your machine learning model as a web service in the Azure cloud, or to IoT Edge devices. The information in this document teaches you how to deploy to the following compute targets:
+Learn how to deploy your machine learning model as a web service in the Azure cloud, or to IoT Edge devices.
+
+The following compute targets, or compute resources, can be used to host your service deployment.
 
 | Compute target | Deployment type | Description |
 | ----- | ----- | ----- |
@@ -26,25 +28,24 @@ Learn how to deploy your machine learning model as a web service in the Azure cl
 |[Azure Machine Learning Compute](how-to-run-batch-predictions.md)| (Preview) Batch inference | Run batch scoring on serverless compute. Supports normal and low-priority VMs. |
 |[Azure IoT Edge](#iotedge)| (Preview) IoT module | Deploy & serve ML models on IoT devices. |
 
-## Deployment workflow
-
-The process of deploying a model is similar for all compute targets:
+The workflow is similar for all compute targets:
 
-1. Register model(s).
-1. Deploy model(s).
-1. Test deployed model(s).
+1. Register the model.
+1. Prepare to deploy (specify assets, usage, and compute target).
+1. Deploy the model to the compute target.
+1. Test the deployed model, also called a web service.
 
 For more information on the concepts involved in the deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning Service](concept-model-management-and-deployment.md).
 
-## Prerequisites for deployment
+## Prerequisites
 
 - A model. If you do not have a trained model, you can use the model & dependency files provided in [this tutorial](https://aka.ms/azml-deploy-cloud).
 
 - The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), or the [Azure Machine Learning Python SDK](https://aka.ms/aml-sdk).
 
-## <a id="registermodel"></a> Register a machine learning model
+## <a id="registermodel"></a> Register ML models
 
-The model registry is a way to store and organize your trained models in the Azure cloud. Models are registered in your Azure Machine Learning service workspace. The model can be trained using Azure Machine Learning, or imported from a model trained elsewhere. The following examples demonstrate how to register a model from file:
+Register your machine learning models in your Azure Machine Learning workspace. The model can come from Azure Machine Learning or from somewhere else. The following examples demonstrate how to register a model from file:
 
 ### Register a model from an Experiment Run
 
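The registration examples themselves fall outside this diff's hunks. As a minimal sketch of the step the heading above introduces, registering a model from an experiment run with the Azure Machine Learning Python SDK might look like the following; the experiment name, the run lookup, and the `outputs/model.onnx` path are illustrative assumptions, not values from the article:

```python
# Hedged sketch: register a model produced by an experiment run.
# Experiment name, run selection, and output path are assumptions.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()                       # reads an assumed config.json
experiment = Experiment(workspace=ws, name="train-mnist")

# Take the most recent run for illustration; normally you would keep the
# Run object returned by experiment.submit(...).
run = next(experiment.get_runs())

# register_model publishes a file the run wrote under its outputs/ folder
# into the workspace model registry.
model = run.register_model(model_name="mnist",
                           model_path="outputs/model.onnx")
print(model.name, model.version)
```

Once registered this way, the model can be retrieved by name from any code with access to the workspace.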
@@ -86,7 +87,7 @@ az ml model register -n onnx_mnist -p mnist/model.onnx
 
 For more information, see the reference documentation for the [Model class](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py).
 
-## How to deploy
+## Prepare to deploy
 
 To deploy as a web service, you must create an inference configuration (`InferenceConfig`) and a deployment configuration. Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data. In the inference config, you specify the scripts and dependencies needed to serve your model. In the deployment config, you specify details of how to serve the model on the compute target.
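To make the pairing of the inference config and the deployment config concrete, here is a minimal sketch targeting Azure Container Instances with the Python SDK; the `score.py` entry script, the `myenv.yml` conda file, the model name, and the ACI sizing are illustrative assumptions rather than values from the article:

```python
# Hedged sketch: inference config + deployment config + deploy, following
# the workflow above. Entry script, conda file, and sizing are assumptions.
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="mnist")                    # a previously registered model

# Inference config: the script and dependencies that serve the model.
inference_config = InferenceConfig(entry_script="score.py",
                                   runtime="python",
                                   conda_file="myenv.yml")

# Deployment config: how the compute target should host the service.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                       memory_gb=1)

service = Model.deploy(workspace=ws,
                       name="mnist-service",
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

Switching compute targets mostly means swapping the deployment configuration, for example `AksWebservice.deploy_configuration` for Azure Kubernetes Service or `LocalWebservice.deploy_configuration` for local testing, while the inference config stays the same.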