articles/machine-learning/service/tutorial-first-experiment-automated-ml.md
12 additions & 11 deletions
@@ -101,12 +101,8 @@ You'll see the **Getting started** screen, since this is your first experiment w
 Column headers| All files have same headers
 Skip rows | None
 
->[!NOTE]
-> If any of the settings on this form are updated the preview will update accordingly.
-
 Select **Next**.
 
-
 1. The **Schema** form allows for further configuration of your data for this experiment. For this example, select the toggle switch for the **day_of_week** feature, so as to not include it for this experiment. Select **Done**, to complete the file upload and creation of the dataset for your experiment.
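If you'd rather script this step than use the form, the sketch below shows a roughly equivalent dataset setup with the Azure Machine Learning Python SDK (v1). The CSV location and the registered dataset name are placeholders for illustration, not values taken from the tutorial.

```python
# Hedged sketch: SDK (v1) equivalent of the file upload and Schema form.
# The file URL and dataset name below are placeholders, not the tutorial's values.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()  # reads the workspace config.json downloaded from the portal

# Create a tabular dataset from the delimited file; column headers come from the
# file and no rows are skipped, matching the form settings above.
dataset = Dataset.Tabular.from_delimited_files(
    path="https://<your-storage-account>/bankmarketing_train.csv"  # placeholder path
)

# Mirror the Schema form toggle: exclude the day_of_week column.
dataset = dataset.drop_columns(["day_of_week"])

# Register the dataset so the automated ML experiment can reference it.
dataset = dataset.register(workspace=ws, name="bankmarketing-train")  # placeholder name
```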
@@ -127,7 +123,7 @@ You'll see the **Getting started** screen, since this is your first experiment w
 Concurrency| Select **5** for the number of max concurrent iterations.
 
 >[!NOTE]
-> For this experiment, you don't set a metric or max cores per iterations threshold. You also don't block algorithms from being tested.
+> For this experiment, you don't set a metric score or max cores per iterations threshold. You also don't block algorithms from being tested.
 
 1. Select **Start** to run the experiment. A screen appears with a status message as the experiment preparation begins.
 
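For reference, these form settings map to an `AutoMLConfig` if you drive the experiment from the Python SDK (v1) instead. Continuing from the dataset sketch above, the compute target name, label column, and experiment name below are assumptions for illustration, not values prescribed by the tutorial.

```python
# Hedged sketch: roughly the same experiment settings expressed with the SDK (v1).
# The compute target name, label column, and experiment name are assumptions.
from azureml.core import Experiment, Workspace
from azureml.core.compute import ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
compute_target = ComputeTarget(workspace=ws, name="cpu-cluster")  # placeholder compute name

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",   # metric used to rank iterations
    training_data=dataset,           # the tabular dataset registered earlier
    label_column_name="y",           # assumed target column of the bank marketing data
    max_concurrent_iterations=5,     # the Concurrency setting from the form
    compute_target=compute_target,
)

experiment = Experiment(ws, "my-first-automl-experiment")  # placeholder experiment name
run = experiment.submit(automl_config, show_output=True)
```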
@@ -137,20 +133,25 @@ You'll see the **Getting started** screen, since this is your first experiment w
 >
 > In production, you'd likely walk away for a bit. But for this tutorial, we suggest you start exploring the iteration results as they complete while the others are still running.
 
-## View experiment details
+## Explore iteration results
 
-As the experiment progresses, the screen updates the **Iteration chart** and **Iteration list** with the different iterations (models) that are run. The iterations list is in order by metric score. By default, the model that scores the highest based on our **AUC_weighted** metric is at the top of the list.
+As the experiment progresses, the screen updates the **Iteration chart** and **Iteration list** with the different iterations (models) created as they complete, and orders them by metric score. By default, the model that scores the highest based on the chosen **AUC_weighted** metric is at the top of the list.
 
 While you wait for all of the experiment iterations to finish, select the **Name** of a completed iteration to explore its performance details.
+
+The following shows the charts and run metrics generated for each iteration, such as a precision-recall curve, confusion matrix, and weighted accuracy scores.
 
-By using automated machine learning in the workspace landing page, you can deploy the best model as a web service in a few steps. Deployment is the integration of the model so it can predict on new data and identify potential areas of opportunity.
-For this experiment, deployment to a web service means that the financial institution now has an iterative and scalable web solution for identifying potential fixed term deposit customers.
+Automated machine learning in the workspace landing page allows you to deploy the best model as a web service in a few steps. Deployment is the integration of the model so it can predict on new data and identify potential areas of opportunity. For this experiment, deployment to a web service means that the financial institution now has an iterative and scalable web solution for identifying potential fixed term deposit customers.
+
+Once the run is complete, navigate back to the **Iteration chart** and **Iterations list** detail page.
 
 In this experiment context, **VotingEnsemble** is considered the best model, based on the **AUC_weighted** metric. We deploy this model, but be advised, deployment takes about 20 minutes to complete. The deployment process entails several steps including registering the model, generating resources, and configuring them for the web service.
 
-1. On the **Run Detail** page, select the **Deploy Best Model** button in the top-right corner.
+1. Select the **Deploy Best Model** button in the top-right corner.
 
 1. Populate the **Deploy Best Model** pane as follows:
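If you're monitoring the experiment from a notebook rather than the workspace UI, a sketch like the following (continuing from the submission sketch above, where `run` is the parent automated ML run) lists each iteration with its **AUC_weighted** score and retrieves the best model.

```python
# Hedged sketch: inspecting iteration results from code instead of the Iteration list.
# Assumes `run` is the parent automated ML run submitted in the earlier sketch.
run.wait_for_completion(show_output=True)

# Each child run is one iteration (model); print its AUC_weighted score.
for child in run.get_children():
    print(child.id, child.get_metrics().get("AUC_weighted"))

# Retrieve the top-scoring iteration and its fitted model (for example, VotingEnsemble).
best_run, fitted_model = run.get_output()
print(best_run.id)
```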
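The **Deploy Best Model** button handles registration and deployment for you. The sketch below shows one way the same outcome might look with the SDK (v1) and an Azure Container Instances web service; the scoring script, conda spec, model name, and service name are placeholders, not artifacts produced by the tutorial.

```python
# Hedged sketch: deploying the best model as an ACI web service from the SDK (v1).
# The scoring script, conda spec, and service name are placeholders; the
# Deploy Best Model button in the UI performs equivalent steps for you.
from azureml.core import Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Register the model from the best iteration of the parent automated ML run.
model = run.register_model(model_name="bankmarketing-automl")  # placeholder model name

env = Environment.from_conda_specification(
    name="inference-env", file_path="inference_env.yml"  # placeholder conda spec
)
inference_config = InferenceConfig(entry_script="score.py", environment=env)  # placeholder script
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(
    workspace=ws,
    name="bankmarketing-service",  # placeholder service name
    models=[model],
    inference_config=inference_config,
    deployment_config=deployment_config,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```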