articles/machine-learning/how-to-auto-train-image-models.md (+1 -1)
@@ -522,7 +522,7 @@ When you've configured your AutoML Job to the desired settings, you can submit t
The automated ML training run generates output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file, which can be viewed from the outputs, logs, and metrics tabs of the child runs.
> [!TIP]
-> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-run-results) section.
+> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.
For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).
articles/machine-learning/how-to-train-with-ui.md (+3 -2)
@@ -37,6 +37,7 @@ There are many ways to create a training job with Azure Machine Learning. You ca
* Or, you may enter the job creation from the left pane. Click **+New** and select **Job**.
![Left navigation entry](media/how-to-train-with-ui/left-nav-entry.png)
+
These options will all take you to the job creation panel, which has a wizard for configuring and creating a training job.
## Select compute resources
@@ -77,7 +78,7 @@ After selecting a compute target, you need to specify the runtime environment fo
### Curated environments
-Curated environments are Azure-defined collections of Python packages used in common ML workloads. Curated environments are available in your workspace by default. These environments are backed by cached Docker images, which reduce the run preparation overhead. The cards displayed in the "Curated environments" page show details of each environment. To learn more, see [curated environments in Azure Machine Learning](resource-curated-environments.md).
+Curated environments are Azure-defined collections of Python packages used in common ML workloads. Curated environments are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation overhead. The cards displayed in the "Curated environments" page show details of each environment. To learn more, see [curated environments in Azure Machine Learning](resource-curated-environments.md).
-To launch the job, choose **Create**. Once the job is created, Azure will show you the run details page, where you can monitor and manage your training job.
+To launch the job, choose **Create**. Once the job is created, Azure will show you the job details page, where you can monitor and manage your training job.
-To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a `ScheduleRecurrence` that describes the wait between runs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
+To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a `ScheduleRecurrence` that describes the wait between jobs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
At the top of your Python file, import the `Schedule` and `ScheduleRecurrence` classes:
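In the source article this sentence is followed by the import itself, which also appears verbatim in the hunk context below:

```python
from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule
```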
@@ -66,7 +66,7 @@ from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule
The `ScheduleRecurrence` constructor has a required `frequency` argument that must be one of the following strings: "Minute", "Hour", "Day", "Week", or "Month". It also requires an integer `interval` argument specifying how many of the `frequency` units should elapse between schedule starts. Optional arguments allow you to be more specific about starting times, as detailed in the [ScheduleRecurrence SDK docs](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedulerecurrence).
-Create a `Schedule` that begins a run every 15 minutes:
+Create a `Schedule` that begins a job every 15 minutes:
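The changed line introduces a code sample in the article body. A minimal sketch of that schedule, assuming an authenticated `Workspace` object `ws`, the ID of a previously published pipeline in `pipeline_id`, and placeholder schedule and experiment names:

```python
from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule

# Recur every 15 minutes; "Minute" is one of the allowed frequency strings.
recurrence = ScheduleRecurrence(frequency="Minute", interval=15)

recurring_schedule = Schedule.create(
    ws,                          # authenticated Workspace (assumed to exist)
    name="MyRecurringSchedule",  # placeholder schedule name
    description="Based on time",
    pipeline_id=pipeline_id,     # ID of a previously published pipeline (assumed)
    experiment_name="MyExperiment",
    recurrence=recurrence,
)
```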
-Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a run.
+Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a job.
To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-none---workflow-provider-none---service-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument.
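A hedged sketch of such a file-reactive schedule, again assuming `ws` and `pipeline_id` exist and using placeholder names for the datastore and watched path:

```python
from azureml.core.datastore import Datastore
from azureml.pipeline.core.schedule import Schedule

datastore = Datastore(workspace=ws, name="workspaceblobstore")  # placeholder datastore name

reactive_schedule = Schedule.create(
    ws,
    name="MyReactiveSchedule",            # placeholder schedule name
    description="Based on input file change.",
    pipeline_id=pipeline_id,              # assumed published pipeline ID
    experiment_name="MyExperiment",       # placeholder experiment name
    datastore=datastore,
    path_on_datastore="folder/to/watch",  # omit to monitor the entire datastore
)
```

As the paragraph above notes, changes inside subdirectories of the watched folder will not trigger the pipeline.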
@@ -104,7 +104,7 @@ In your Web browser, navigate to Azure Machine Learning. From the **Endpoints**
:::image type="content" source="./media/how-to-trigger-published-pipeline/scheduled-pipelines.png" alt-text="Pipelines page of AML":::
-On this page you can see summary information about all the pipelines in the Workspace: names, descriptions, status, and so forth. Drill in by clicking your pipeline. On the resulting page, there are more details about your pipeline and you may drill down into individual runs.
+On this page you can see summary information about all the pipelines in the Workspace: names, descriptions, status, and so forth. Drill in by clicking your pipeline. On the resulting page, there are more details about your pipeline and you may drill down into individual jobs.
## Deactivate the pipeline
@@ -221,7 +221,7 @@ In an Azure Data Factory pipeline, the *Machine Learning Execute Pipeline* activ
## Next steps
-In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule recurs based on elapsed clock time. The other schedule runs if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual runs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure Logic App to trigger a pipeline.
+In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule recurs based on elapsed clock time. The other schedule jobs if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual jobs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure Logic App to trigger a pipeline.
articles/machine-learning/how-to-troubleshoot-auto-ml.md (+2 -2)
@@ -148,14 +148,14 @@ If the listed version is not a supported version:
## Data access
-For automated ML runs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
+For automated ML jobs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
Error message:
`Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.`
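One way to supply the missing credentials (a sketch assuming the `azureml-core` SDK, an authenticated `Workspace` object `ws`, and placeholder storage details) is to re-register the file share datastore with an account key or SAS token:

```python
from azureml.core import Datastore

Datastore.register_azure_file_share(
    workspace=ws,
    datastore_name="file_datastore",      # placeholder datastore name
    file_share_name="my-share",           # placeholder file share name
    account_name="mystorageaccount",      # placeholder storage account
    account_key="<storage-account-key>",  # or pass sas_token= instead
    overwrite=True,                       # update the existing registration
)
```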
## Data schema
-When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, an error message similar to the following results. Learn more about how to [edit and submit experiments from the studio UI](how-to-use-automated-ml-for-ml-models.md#edit-and-submit-runs-preview).
+When you try to create a new automated ML experiment via the **Edit and submit** button in the Azure Machine Learning studio, the data schema for the new experiment must match the schema of the data that was used in the original experiment. Otherwise, an error message similar to the following results. Learn more about how to [edit and submit experiments from the studio UI](how-to-use-automated-ml-for-ml-models.md#edit-and-submit-jobs-preview).
articles/machine-learning/how-to-troubleshoot-batch-endpoints.md (+1 -1)
@@ -65,7 +65,7 @@ Because of the distributed nature of batch scoring jobs, there are logs from sev
- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. As the mini-batches end, the log records the results of the job. If the job failed, it will show the error message and where to start troubleshooting.
-- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the run result.
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job. This log provides information on task creation, progress monitoring, and the job result.
For a concise understanding of errors in your script there is:
-In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many runs are created and each run creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance.
+In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many jobs are created and each job creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance.
For example, automated ML generates the following charts based on experiment type.
@@ -35,18 +35,18 @@ For example, automated ML generates the following charts based on experiment typ
- The [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md) (no code required)
- The [Azure Machine Learning Python SDK](how-to-configure-auto-train.md)
-## View run results
+## View job results
-After your automated ML experiment completes, a history of the runs can be found via:
+After your automated ML experiment completes, a history of the jobs can be found via:
- A browser with [Azure Machine Learning studio](overview-what-is-machine-learning-studio.md)
-- A Jupyter notebook using the [RunDetails Jupyter widget](/python/api/azureml-widgets/azureml.widgets.rundetails)
+- A Jupyter notebook using the [JobDetails Jupyter widget](/python/api/azureml-widgets/azureml.widgets.rundetails)
The following steps and video show you how to view the run history and model evaluation metrics and charts in the studio:
1. [Sign into the studio](https://ml.azure.com/) and navigate to your workspace.
1. In the left menu, select **Experiments**.
1. Select your experiment from the list of experiments.
-1. In the table at the bottom of the page, select an automated ML run.
+1. In the table at the bottom of the page, select an automated ML job.
1. In the **Models** tab, select the **Algorithm name** for the model you want to evaluate.
1. In the **Metrics** tab, use the checkboxes on the left to view metrics and charts.