---
title: What is Designer (v1)?
titleSuffix: Azure Machine Learning
description: Learn about how the drag-and-drop Designer (v1) UI in Azure Machine Learning studio enables model training and deployment tasks.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.author: lagayhar
ms.reviewer: lagayhar
author: lgayhardt
ms.date: 05/22/2024
ms.custom: UpdateFrequency5, designer, training
---

# What is Designer (v1) in Azure Machine Learning?

The Azure Machine Learning designer is a drag-and-drop interface used to train and deploy models in Azure Machine Learning studio. This article describes the tasks you can do in the designer.

> [!IMPORTANT]
> Designer in Azure Machine Learning supports two types of pipelines, which use either classic prebuilt (v1) components or custom (v2) components. The two component types aren't compatible within pipelines, and designer v1 isn't compatible with CLI v2 and SDK v2. **This article applies to pipelines that use classic prebuilt (v1) components.**
>
> - **Classic prebuilt components (v1)** cover typical data processing and machine learning tasks like regression and classification. Azure Machine Learning continues to support the existing classic prebuilt components, but no new prebuilt components are being added.
>
> - **Custom components (v2)** let you wrap your own code as components, enabling sharing across workspaces and seamless authoring across Azure Machine Learning studio, CLI v2, and SDK v2 interfaces. It's best to use custom components for new projects, because they're compatible with Azure Machine Learning v2 and continue to receive new updates. For more information about custom components and Designer (v2), see [Azure Machine Learning designer (v2)](../concept-designer.md?view=azureml-api-2&preserve-view=true).

The following animated GIF shows how you can build a pipeline visually in the designer by dragging and dropping assets onto the canvas and connecting them.

:::image type="content" source="../media/concept-designer/designer-drag-and-drop.gif" alt-text="GIF of building a pipeline in the designer." lightbox="../media/concept-designer/designer-drag-and-drop.gif":::

To learn about the components available in the designer, see the [Algorithm and component reference](../algorithm-module-reference/module-reference.md). To get started with the designer, see [Tutorial: Train a no-code regression model](tutorial-designer-automobile-price-train-score.md).

## Model training and deployment

The designer uses your Azure Machine Learning [workspace](../concept-workspace.md) to organize shared resources such as:

- [Pipelines](#pipelines)
- [Data](#data)
- [Compute resources](#compute)
- [Registered models](concept-azure-machine-learning-architecture.md#models)
- [Published pipeline jobs](#publish)
- [Real-time endpoints](#deploy)

The following diagram illustrates how you can use the designer to build an end-to-end machine learning workflow. You can train, test, and deploy models, all in the designer interface.

:::image type="content" source="../media/concept-designer/designer-workflow-diagram.png" alt-text="Workflow diagram for training, batch inference, and real-time inference in the designer." border="false":::

- Drag and drop [data assets](#data) and [components](#components) onto the designer visual canvas, and connect the components to create a [pipeline draft](#pipeline-drafts).
- Submit a [pipeline job](#pipeline-jobs) that uses the compute resources in your Azure Machine Learning workspace.
- Convert your **training pipelines** to **inference pipelines**.
- [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit new pipelines that run with different parameters and data assets.
  - Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and data assets.
  - Publish a **batch inference pipeline** to make predictions on new data by using a previously trained model.
- [Deploy](#deploy) a **real-time inference pipeline** to an online endpoint to make predictions on new data in real time.

## Data

A machine learning data asset makes it easy to access and work with your data. The designer includes several [sample data assets](samples-designer.md#datasets) for you to experiment with. You can [register](how-to-create-register-datasets.md) more data assets as you need them.
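
If you prefer to register data assets from code rather than from the studio UI, the following minimal sketch uses the v1 Python SDK (`azureml-core`). The source URL and asset name are placeholders for illustration.

```python
# Minimal sketch: register a tabular data asset with the v1 Python SDK so it
# appears under data assets in the designer. The CSV URL and names are placeholders.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()  # reads the config.json downloaded from your workspace

# Create a tabular dataset from a delimited file (replace with your own source).
dataset = Dataset.Tabular.from_delimited_files(
    path="https://example.com/data/automobile-prices.csv"
)

# Register it so the designer can list it as a data asset.
dataset.register(
    workspace=ws,
    name="automobile-prices",
    description="Sample tabular data asset for designer experiments",
    create_new_version=True,
)
```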

## Components

A component is an algorithm that you can run on your data. The designer has several components ranging from data ingress functions to training, scoring, and validation processes.

A component can have parameters that you use to configure its internal algorithm. When you select a component on the canvas, the component's parameters and other settings appear in a properties pane to the right of the canvas. You can modify the parameters and set the compute resources for individual components in that pane.

:::image type="content" source="../media/concept-designer/properties.png" alt-text="Screenshot showing the component properties.":::

For more information about the library of available machine learning algorithms, see the [Algorithm and component reference](../component-reference/component-reference.md). For help with choosing an algorithm, see the [Azure Machine Learning Algorithm Cheat Sheet](algorithm-cheat-sheet.md).

## Pipelines

A [pipeline](../concept-ml-pipelines.md) consists of data assets and analytical components that you connect. Pipelines help you reuse your work and organize your projects.

Pipelines have many uses. You can create pipelines that:

- Train a single model.
- Train multiple models.
- Make predictions in real time or in batch.
- Clean data only.

### Pipeline drafts

As you edit a pipeline in the designer, your progress is saved as a *pipeline draft*. You can edit a pipeline draft at any point by adding or removing components, configuring compute targets, or setting parameters.

A valid pipeline has the following characteristics:

- Data assets can connect only to components.
- Components can connect only to data assets or to other components.
- All input ports for components must have some connection to the data flow.
- All required parameters for each component must be set.

When you're ready to run your pipeline draft, you save the pipeline and submit a pipeline job.

### Pipeline jobs

Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a *pipeline job*. Pipeline jobs are grouped into *experiments* to organize job history.

You can go back to any pipeline job to inspect it for troubleshooting or auditing. **Clone** a pipeline job to create a new pipeline draft to edit.
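
You can also enumerate pipeline jobs programmatically, for example when reviewing job history. The following sketch assumes the v1 Python SDK and a placeholder experiment name.

```python
# Minimal sketch: list the pipeline jobs recorded under an experiment with the
# v1 Python SDK. The experiment name is a placeholder; use the name you set
# when submitting from the designer.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="designer-sample-experiment")

# Each designer submission appears as a run (pipeline job) in the experiment.
for run in experiment.get_runs():
    print(run.id, run.status, run.get_portal_url())
```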

## <a name="compute"></a> Compute resources

Compute targets are attached to your [Azure Machine Learning workspace](../concept-workspace.md) and managed in [Azure Machine Learning studio](https://ml.azure.com). Use compute resources from your workspace to run your pipelines and to host your deployed models as online endpoints or as pipeline endpoints for batch inference. The supported compute targets are as follows:

| Compute target | Training | Deployment |
| ---- |:----:|:----:|
| Azure Machine Learning compute | ✓ | |
| Azure Kubernetes Service (AKS) | | ✓ |
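
You can create an Azure Machine Learning compute cluster in the studio UI or from code. The following sketch uses the v1 Python SDK with placeholder values for the cluster name, VM size, and node counts.

```python
# Minimal sketch: create an Azure Machine Learning compute cluster for training
# with the v1 Python SDK. The cluster name, VM size, and node counts are
# placeholder values; pick sizes available in your region and quota.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,   # scale down to zero nodes when idle
    max_nodes=4,
)

# Create the cluster, then select it as the compute target in the designer.
compute_target = ComputeTarget.create(ws, name="cpu-cluster", provisioning_configuration=config)
compute_target.wait_for_completion(show_output=True)
```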

## Deploy

To do real-time inferencing, you must deploy a pipeline as an [online endpoint](../concept-endpoints-online.md). The online endpoint creates an interface between an external application and your scoring model. The endpoint is based on REST, a popular architecture choice for web programming projects. A call to an online endpoint returns prediction results to the application in real time.

To make a call to an online endpoint, you pass the API key that was created when you deployed the endpoint. Online endpoints must be deployed to an AKS cluster. To learn how to deploy your model, see [Tutorial: Deploy a machine learning model with the designer](tutorial-designer-automobile-price-deploy.md).
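
The following sketch shows what a call to an online endpoint might look like from a Python client. The scoring URL, API key, and input schema are placeholders; use the values from your endpoint's **Consume** tab and the columns your model expects.

```python
# Minimal sketch: call a real-time endpoint deployed from the designer. The
# scoring URL, API key, and input column names are placeholders.
import json
import requests

scoring_url = "http://<your-aks-endpoint>/api/v1/service/<service-name>/score"
api_key = "<your-api-key>"

payload = {
    "Inputs": {
        "WebServiceInput0": [
            {"make": "toyota", "horsepower": 96, "engine-size": 109}
        ]
    }
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",  # API key created when the endpoint was deployed
}

response = requests.post(scoring_url, data=json.dumps(payload), headers=headers)
response.raise_for_status()
print(response.json())  # prediction results returned in real time
```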

## Publish

You can also publish a pipeline to a *pipeline endpoint*. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline jobs from external applications by using REST calls. However, you can't send or receive data in real time by using a pipeline endpoint.

Published pipeline endpoints are flexible and can be used to train or retrain models, [do batch inferencing](how-to-run-batch-predictions-designer.md), or process new data. You can publish multiple pipelines to a single pipeline endpoint and specify which pipeline version to run.

A published pipeline runs on the compute resources you define in the pipeline draft for each component. The designer creates the same [PublishedPipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.publishedpipeline) object as the SDK.
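
As an illustration, the following sketch submits a new job to a published pipeline by using its REST endpoint, with the v1 Python SDK handling authentication. The pipeline ID, experiment name, and parameter names are placeholders.

```python
# Minimal sketch: submit a job to a published pipeline's REST endpoint. The
# published pipeline ID, experiment name, and parameter names are placeholders.
import requests
from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.pipeline.core import PublishedPipeline

ws = Workspace.from_config()

# Look up the published pipeline and its REST endpoint URL.
published_pipeline = PublishedPipeline.get(ws, id="<published-pipeline-id>")
rest_endpoint = published_pipeline.endpoint

# Authenticate and submit a new pipeline job with different parameter values.
auth_header = InteractiveLoginAuthentication().get_authentication_header()
response = requests.post(
    rest_endpoint,
    headers=auth_header,
    json={
        "ExperimentName": "published-pipeline-run",
        "ParameterAssignments": {"learning_rate": 0.01},
    },
)
response.raise_for_status()
print("Submitted pipeline job:", response.json().get("Id"))
```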

## Related content

- Learn the fundamentals of predictive analytics and machine learning with [Tutorial: Predict automobile price with the designer](tutorial-designer-automobile-price-train-score.md).
- Learn how to modify existing [designer samples](samples-designer.md) to adapt them to your needs.