articles/machine-learning/tutorial-first-experiment-automated-ml.md (8 additions and 11 deletions)
Complete the following experiment set-up and run steps by using the Azure Machine Learning studio:

1. In **Training method**, select **Train automatically**, then select **Start configuring job**.
1. In **Basic settings**, select **Create new**, then for **Experiment name**, enter *my-1st-automl-experiment*.

1. Select **Next** to load your dataset.
## Create and load a dataset as a data asset

Before you configure your experiment, upload the data file to your workspace in the form of an Azure Machine Learning data asset. For this tutorial, you can think of a data asset as your dataset for the Automated ML job. Doing so allows you to ensure that your data is formatted appropriately for your experiment.
1. In **Task type & data**, for **Select task type**, choose **Classification**.
1. Choose the *bankmarketing_train.csv* file on your local computer. You downloaded this file as a [prerequisite](https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv).
1. Select **Next**.

   When the upload finishes, the **Data preview** area is populated based on the file type.
1. In the **Settings** form, review the values for your data. Then select **Next**.
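The studio's **Data preview** step can also be sanity-checked locally before you upload. The sketch below is illustrative only and not part of the tutorial: it reads a CSV header and a few rows with the Python standard library, using a tiny stand-in file whose column names are hypothetical rather than the actual *bankmarketing_train.csv* schema.

```python
import csv

def preview_csv(path, n_rows=5):
    """Read the header and the first few rows of a CSV file,
    loosely mimicking what the studio's Data preview shows."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = [row for _, row in zip(range(n_rows), reader)]
    return header, rows

# Tiny stand-in file; the real tutorial uses bankmarketing_train.csv,
# and these column names are made up for illustration.
with open("sample.csv", "w", newline="") as f:
    f.write("age,job,y\n42,admin,yes\n35,technician,no\n")

header, rows = preview_csv("sample.csv")
print(header)     # ['age', 'job', 'y']
print(len(rows))  # 2
```

A quick local check like this can catch a malformed header or delimiter before the upload, although the studio performs its own validation regardless.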
After you load and configure your data, you can set up your experiment.

1. For **Validation type**, select **k-fold cross-validation**.
1. For **Number of cross validations**, select **2**.
1. Select **Next**.

1. Select **compute cluster** as your compute type.
A compute target is a local or cloud-based resource environment used to run your training script or host your service deployment. For this experiment, you can either try a cloud-based serverless compute (preview) or create your own cloud-based compute.
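Automated ML performs the k-fold splitting configured above internally; nothing in the tutorial requires code. Purely as an illustration of what **k-fold cross-validation** with **2** folds means, this minimal sketch partitions row indices into folds, each fold serving once as the validation set while the rest train:

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k contiguous folds and
    return (train_indices, validation_indices) pairs, one per fold."""
    # Distribute any remainder across the first few folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [
        ([i for f in folds[:j] + folds[j + 1:] for i in f], folds[j])
        for j in range(k)
    ]

# With "Number of cross validations" = 2, a 10-row dataset splits as:
splits = k_fold_indices(10, 2)
print(splits[0][1])  # validation fold 0 -> [0, 1, 2, 3, 4]
print(splits[1][1])  # validation fold 1 -> [5, 6, 7, 8, 9]
```

Each candidate model is trained k times and its metric averaged across the validation folds, which is why more folds lengthen the run.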
1. Select **Submit training job** to run the experiment. The **Overview** screen opens with the **Status** at the top as the experiment preparation begins. This status updates as the experiment progresses. Notifications also appear in the studio to inform you of the status of your experiment.
>[!IMPORTANT]
> Preparation of the experiment run takes **10-15 minutes**. Once running, it takes **2-3 minutes more for each iteration**.
>
> In production, you'd likely walk away for a bit. But for this tutorial, you can start exploring the tested algorithms on the **Models** tab as they complete while the others continue to run.
## Explore models

Navigate to the **Models + child jobs** tab to see the algorithms (models) tested. By default, the job orders the models by metric score as they complete. For this tutorial, the model that scores the highest based on the chosen **AUCWeighted** metric is at the top of the list.
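Automated ML computes **AUCWeighted** for you; no code is needed. For intuition only, the sketch below implements the binary building block of that metric: plain AUC, the probability that a randomly chosen positive example is scored above a randomly chosen negative one. (The weighted variant averages per-class AUCs weighted by class prevalence; that averaging is omitted here.)

```python
def binary_auc(labels, scores):
    """AUC as the probability that a random positive is scored
    above a random negative; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
print(binary_auc(labels, scores))  # 0.75
```

A score of 1.0 means the model ranks every positive above every negative, while 0.5 is no better than chance, which is why higher-ranked models sit at the top of the **Models** list.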
While you wait for all of the experiment models to finish, select the **Algorithm name** of a completed model to explore its performance details. Select the **Overview** and the **Metrics** tabs for information about the job.
The following animation shows the selected model's properties, metrics, and performance charts.

:::image type="content" source="./media/tutorial-first-experiment-automated-ml/run-detail.gif" alt-text="Animation that shows different views available for a child job." lightbox="./media/tutorial-first-experiment-automated-ml/run-detail.gif":::
## View model explanations

To generate model explanations:
1. Select your compute type, and then select the **automl-compute** instance or cluster that you created previously. This compute starts a child job to generate the model explanations.
1. Select **Create**. A green success message appears.

>[!NOTE]
> The explainability job takes about 2-5 minutes to complete.
1. Select **Explanations (preview)**. This tab populates after the explainability run completes.
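The explainability job computes feature importances for you inside the studio. Purely for intuition, the sketch below implements one common explanation technique, permutation feature importance, on a hypothetical toy model; it is not the tutorial's actual explainer.

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, seed=0):
    """Drop in the metric when one feature's column is shuffled:
    a larger drop suggests the model relies on that feature more."""
    base = metric(y, [model(row) for row in X])
    rng = random.Random(seed)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return base - metric(y, [model(row) for row in X_perm])

accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy "model" (hypothetical) that only looks at feature 0:
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 5], [0.1, 5], [0.8, 5], [0.2, 5]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, 0, accuracy))  # nonzero drop: feature 0 is used
print(permutation_importance(model, X, y, 1, accuracy))  # 0.0: feature 1 is ignored
```

The **Explanations (preview)** tab presents aggregated importances like these per feature, which is how you can tell which inputs drive the model's predictions.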