The designer for Azure Machine Learning enables you to prep data, train, test, deploy, manage, and track machine learning models without writing code.
Azure Machine Learning designer lets you visually connect [datasets](#datasets) and [modules](#module) on an interactive canvas to create machine learning models. To learn how to get started with the designer, see [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md).

## Model training and deployment
The designer gives you a visual canvas to build, test, and deploy machine learning models. With the designer you can:
+ Drag-and-drop [datasets](#datasets) and [modules](#module) onto the canvas.
+ Connect the modules together to create a [pipeline draft](#pipeline-draft).
+ Submit a [pipeline run](#pipeline-run) using the compute resources in your Azure Machine Learning workspace.
+ Convert your **training pipelines** to **inference pipelines**.
+ [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit new pipeline runs with different parameters and datasets.
    + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and datasets.
    + Publish a **batch inference pipeline** to make predictions on new data by using a previously trained model.
+ [Deploy](#deploy) a **real-time inference pipeline** to a real-time endpoint to make predictions on new data in real time.

## Pipeline
A [pipeline](concept-azure-machine-learning-architecture.md#ml-pipelines) consists of datasets and analytical modules, which you connect together. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
### Pipeline draft
As you edit a pipeline in the designer, your progress is saved as a **pipeline draft**. You can edit a pipeline draft at any point by adding or removing modules, configuring compute targets, creating parameters, and so on.

A valid pipeline has these characteristics:
* Datasets can only connect to modules.
* Modules can only connect to either datasets or other modules.
* All input ports for modules must have some connection to the data flow.
* All required parameters for each module must be set.

When you're ready to run your pipeline draft, you submit a pipeline run.
### Pipeline run
Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline run**. You can go back to any pipeline run to inspect it for troubleshooting or auditing purposes. **Clone** a pipeline run to create a new pipeline draft for you to edit.

Pipeline runs are grouped into [experiments](concept-azure-machine-learning-architecture.md#experiments) to organize run history. You can set the experiment for every pipeline run.
## Datasets
A machine learning dataset makes it easy to access and work with your data. A number of sample datasets are included in the designer for you to experiment with.
## Module
A module is an algorithm that you can run on your data. The designer has a number of modules ranging from data ingress functions to training, scoring, and validation processes.
A module may have a set of parameters that you can use to configure the module's internal algorithms. When you select a module on the canvas, the module's parameters are displayed in the Properties pane to the right of the canvas. You can modify the parameters in that pane to tune your model. You can set the compute resources for individual modules in the designer.
## Compute resources

Use compute resources from your workspace to run your pipeline and host your deployed models as real-time endpoints or pipeline endpoints.
Compute targets are attached to your Machine Learning [workspace](concept-workspace.md). You manage your compute targets in your workspace in [Azure Machine Learning studio](https://ml.azure.com).
## Deploy
To perform real-time inferencing, you must deploy a pipeline as a **real-time endpoint**. The real-time endpoint creates an interface between an external application and your scoring model. A call to a real-time endpoint returns prediction results to the application in real time. To make a call to a real-time endpoint, you pass the API key that was created when you deployed the endpoint. The endpoint is based on REST, a popular architecture choice for web programming projects.
Real-time endpoints must be deployed to an Azure Kubernetes Service cluster.
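As a sketch of what such a call looks like from a client: it is an HTTP POST whose JSON body carries the input rows and whose `Authorization` header carries the API key. The input columns, key, and scoring URI below are placeholders for illustration, not values from this article:

```python
import json

# Placeholder input rows; the real schema matches your model's input features.
rows = [{"make": "toyota", "horsepower": 95}]

# Placeholder for the API key created when you deployed the endpoint.
api_key = "<api-key>"

headers = {
    "Content-Type": "application/json",
    # The key authorizes each call to the real-time endpoint.
    "Authorization": f"Bearer {api_key}",
}
body = json.dumps({"data": rows})

# A real client would then POST to the endpoint's scoring URI, for example:
# response = requests.post(scoring_uri, data=body, headers=headers)
# predictions = response.json()
```

The scoring URI and keys are shown with the deployed endpoint in the studio.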
To learn how to deploy your model, see [Tutorial: Deploy a machine learning model with the designer](tutorial-designer-automobile-price-deploy.md).
## Publish
You can also publish a pipeline to a **pipeline endpoint**. Similar to a real-time endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
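A minimal sketch of such a REST submission follows. The endpoint URL, Azure AD token, experiment name, and parameter name are all placeholders; the `ExperimentName`/`ParameterAssignments` payload shape follows the Azure Machine Learning pipeline REST convention:

```python
import json

# Placeholders: the endpoint URL and Azure AD bearer token come from your workspace.
endpoint_url = "<pipeline-endpoint-url>"
aad_token = "<azure-ad-token>"

headers = {"Authorization": f"Bearer {aad_token}"}

# The request names the experiment that groups the run and can override
# pipeline parameters for this submission.
payload = {
    "ExperimentName": "retrain-models",               # illustrative experiment name
    "ParameterAssignments": {"learning_rate": 0.02},  # illustrative parameter
}
body = json.dumps(payload)

# A real submission would be:
# response = requests.post(endpoint_url, json=payload, headers=headers)
```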
Published pipelines are flexible: they can be used to train or retrain models, perform batch inferencing, process new data, and much more. You can publish multiple pipelines to a single pipeline endpoint and specify which pipeline version to run.

A published pipeline runs on the compute resources you define in the pipeline draft for each module.

The designer creates the same [PublishedPipeline](https://docs.microsoft.com/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.publishedpipeline?view=azure-ml-py) object as the SDK.
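Because the object is the same, you can work with a designer-published pipeline from the SDK. The sketch below assumes the `azureml-sdk` package is installed and a workspace `config.json` is available; the experiment name and parameter are illustrative only:

```python
def rerun_published_pipeline(experiment_name="designer-reruns"):
    """Sketch: resubmit a designer-published pipeline with the SDK.

    Assumes azureml-sdk is installed and a workspace config.json exists;
    names and parameter values here are illustrative placeholders.
    """
    from azureml.core import Experiment, Workspace
    from azureml.pipeline.core import PublishedPipeline

    ws = Workspace.from_config()
    # Designer-published pipelines appear alongside SDK-published ones.
    published = PublishedPipeline.list(ws)[0]
    # Resubmit with different parameter values, no authoring code needed.
    return Experiment(ws, experiment_name).submit(
        published, pipeline_parameters={"learning_rate": 0.01}
    )
```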
## Moving from the visual interface to the designer