articles/machine-learning/how-to-move-workspace.md (4 additions, 4 deletions)

@@ -26,7 +26,7 @@ Moving the workspace enables you to migrate the workspace and its contents as a
 | Workspace contents | Moved with workspace |
 | ----- |:-----:|
 | Datasets | Yes |
-| Experiment runs| Yes |
+| Experiment jobs| Yes |
 | Environments | Yes |
 | Models and other assets stored in the workspace | Yes |
 | Compute resources | No |
@@ -55,7 +55,7 @@ Moving the workspace enables you to migrate the workspace and its contents as a
 
 | Resource provider | Why it's needed |
 | ----- | ----- |
-|__Microsoft.DocumentDB/databaseAccounts__| Azure CosmosDB instance that logs metadata for the workspace. |
+|__Microsoft.DocumentDB/databaseAccounts__| Azure Cosmos DB instance that logs metadata for the workspace. |
 |__Microsoft.Search/searchServices__| Azure Search provides indexing capabilities for the workspace. |
 
 For information on registering resource providers, see [Resolve errors for resource provider registration](/azure/azure-resource-manager/templates/error-register-resource-provider).
@@ -69,7 +69,7 @@ Moving the workspace enables you to migrate the workspace and its contents as a
 
 * Workspace move is not meant for replicating workspaces, or moving individual assets such as models or datasets from one workspace to another.
 * Workspace move doesn't support migration across Azure regions or Azure Active Directory tenants.
-* The workspace mustn't be in use during the move operation. Verify that all experiment runs, data profiling runs, and labeling projects have completed. Also verify that inference endpoints aren't being invoked.
+* The workspace mustn't be in use during the move operation. Verify that all experiment jobs, data profiling jobs, and labeling projects have completed. Also verify that inference endpoints aren't being invoked.
 * The workspace will become unavailable during the move.
 * Before the move, you must delete or detach computes and inference endpoints from the workspace.
 
@@ -81,7 +81,7 @@ Moving the workspace enables you to migrate the workspace and its contents as a
 az account set -s origin-sub-id
 ```
 
-2. Verify that the origin workspace isn't being used. Check that any experiment runs, data profiling runs, or labeling projects have completed. Also verify that inferencing endpoints aren't being invoked.
+2. Verify that the origin workspace isn't being used. Check that any experiment jobs, data profiling jobs, or labeling projects have completed. Also verify that inferencing endpoints aren't being invoked.
 
 3. Delete or detach any computes from the workspace, and delete any inferencing endpoints. Moving computes and endpoints isn't supported. Also note that the workspace will become unavailable during the move.
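Once computes and endpoints are removed, the move itself comes down to a single Azure CLI call. The following Python sketch builds that call; the subscription, resource group, and workspace names are hypothetical placeholders, and the `az resource move` invocation shape reflects general Azure CLI usage, not text from this diff:

```python
# Hypothetical names -- substitute your own workspace resource ID and
# destination resource group.
workspace_id = (
    "/subscriptions/origin-sub-id/resourceGroups/origin-rg/providers/"
    "Microsoft.MachineLearningServices/workspaces/myworkspace"
)

def build_move_command(resource_id, destination_group):
    """Build the `az resource move` invocation for moving a workspace."""
    return [
        "az", "resource", "move",
        "--destination-group", destination_group,
        "--ids", resource_id,
    ]

cmd = build_move_command(workspace_id, "destination-rg")
print(" ".join(cmd))
# To actually run it (requires the Azure CLI and a signed-in session):
# import subprocess
# subprocess.run(cmd, check=True)
```

Building the argument list separately keeps the placeholder substitution explicit before anything is executed against your subscription.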
articles/machine-learning/how-to-responsible-ai-scorecard.md (1 addition, 1 deletion)

@@ -22,7 +22,7 @@ Azure Machine Learning’s Responsible AI dashboard is designed for machine lear
 - While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders.
 - AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
 
-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Job History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
articles/machine-learning/how-to-retrain-designer.md (8 additions, 8 deletions)

@@ -74,7 +74,7 @@ For this example, you will change the training data path from a fixed value to a
 > - After detaching, you can delete the pipeline parameter in the **Settings** pane.
 > - You can also add a pipeline parameter in the **Settings** pane, and then apply it on some component parameter.
 
-1. Submit the pipeline run.
+1. Submit the pipeline job.
 
 ## Publish a training pipeline
 
@@ -90,21 +90,21 @@ Publish a pipeline to a pipeline endpoint to easily reuse your pipelines in the
 
 ## Retrain your model
 
-Now that you have a published training pipeline, you can use it to retrain your model on new data. You can submit runs from a pipeline endpoint from the studio workspace or programmatically.
+Now that you have a published training pipeline, you can use it to retrain your model on new data. You can submit jobs from a pipeline endpoint from the studio workspace or programmatically.
 
-### Submit runs by using the studio portal
+### Submit jobs by using the studio portal
 
-Use the following steps to submit a parameterized pipeline endpoint run from the studio portal:
+Use the following steps to submit a parameterized pipeline endpoint job from the studio portal:
 
 1. Go to the **Endpoints** page in your studio workspace.
 1. Select the **Pipeline endpoints** tab. Then, select your pipeline endpoint.
 1. Select the **Published pipelines** tab. Then, select the pipeline version that you want to run.
 1. Select **Submit**.
-1. In the setup dialog box, you can specify the parameters values for the run. For this example, update the data path to train your model using a non-US dataset.
+1. In the setup dialog box, you can specify the parameter values for the job. For this example, update the data path to train your model using a non-US dataset.
 
-
+
 
-### Submit runs by using code
+### Submit jobs by using code
 
 You can find the REST endpoint of a published pipeline in the overview panel. By calling the endpoint, you can retrain the published pipeline.
 
@@ -116,4 +116,4 @@ In this article, you learned how to create a parameterized training pipeline end
 
 For a complete walkthrough of how you can deploy a model to make predictions, see the [designer tutorial](tutorial-designer-automobile-price-train-score.md) to train and deploy a regression model.
 
-For how to publish and submit a run to pipeline endpoint using SDK, see [this article](how-to-deploy-pipelines.md).
+For how to publish and submit a job to pipeline endpoint using SDK, see [this article](how-to-deploy-pipelines.md).
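To ground the "submit jobs by using code" path, here is a hedged Python sketch. The endpoint URL and bearer token are placeholders you would copy from the studio and your Azure AD authentication flow, `training_data_path` is a hypothetical pipeline parameter name, and the `ExperimentName`/`ParameterAssignments` body shape follows the published-pipeline REST convention covered in the linked SDK article:

```python
import json

# Placeholders: copy the REST endpoint from the published pipeline's overview
# panel, and obtain a bearer token through your Azure AD authentication flow.
rest_endpoint = "<REST endpoint URL>"
aad_token = "<AAD bearer token>"

# "training_data_path" is a hypothetical pipeline parameter name.
payload = {
    "ExperimentName": "retrain-from-endpoint",
    "ParameterAssignments": {"training_data_path": "path/to/new-data"},
}
body = json.dumps(payload)
print(body)

# Submit the job once the placeholders are filled in:
# import requests
# response = requests.post(
#     rest_endpoint,
#     headers={
#         "Authorization": f"Bearer {aad_token}",
#         "Content-Type": "application/json",
#     },
#     data=body,
# )
# print(response.json())
```

Keeping the payload construction separate from the POST makes it easy to review the parameter assignments before submitting against a live endpoint.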
articles/machine-learning/how-to-run-batch-predictions-designer.md (4 additions, 4 deletions)

@@ -101,9 +101,9 @@ Now you're ready to deploy the inference pipeline. This will deploy the pipeline
 
 Now, you have a published pipeline with a dataset parameter. The pipeline will use the trained model created in the training pipeline to score the dataset you provide as a parameter.
 
-### Submit a pipeline run
+### Submit a pipeline job
 
-In this section, you'll set up a manual pipeline run and alter the pipeline parameter to score new data.
+In this section, you'll set up a manual pipeline job and alter the pipeline parameter to score new data.
 
 1. After the deployment is complete, go to the **Endpoints** section.
 
@@ -119,7 +119,7 @@ In this section, you'll set up a manual pipeline para
 
 1. Select the pipeline you published.
 
-The pipeline details page shows you a detailed run history and connection string information for your pipeline.
+The pipeline details page shows you a detailed job history and connection string information for your pipeline.
 
 1. Select **Submit** to create a manual run of the pipeline.
 
@@ -133,7 +133,7 @@ In this section, you'll set up a manual pipeline para
 
 You can find information on how to consume pipeline endpoints and published pipeline in the **Endpoints** section.
 
-You can find the REST endpoint of a pipeline endpoint in the run overview panel. By calling the endpoint, you're consuming its default published pipeline.
+You can find the REST endpoint of a pipeline endpoint in the job overview panel. By calling the endpoint, you're consuming its default published pipeline.
 
 You can also consume a published pipeline in the **Published pipelines** page. Select a published pipeline and you can find the REST endpoint of it in the **Published pipeline overview** panel to the right of the graph.
articles/machine-learning/how-to-save-write-experiment-files.md (7 additions, 7 deletions)

@@ -18,13 +18,13 @@ ms.date: 03/10/2020
 
 In this article, you learn where to save input files, and where to write output files from your experiments to prevent storage limit errors and experiment latency.
 
-When launching training runs on a [compute target](concept-compute-target.md), they are isolated from outside environments. The purpose of this design is to ensure reproducibility and portability of the experiment. If you run the same script twice, on the same or another compute target, you receive the same results. With this design, you can treat compute targets as stateless computation resources, each having no affinity to the jobs that are running after they are finished.
+When launching training jobs on a [compute target](concept-compute-target.md), they are isolated from outside environments. The purpose of this design is to ensure reproducibility and portability of the experiment. If you run the same script twice, on the same or another compute target, you receive the same results. With this design, you can treat compute targets as stateless computation resources, each having no affinity to the jobs that are running after they are finished.
 
 ## Where to save input files
 
 Before you can initiate an experiment on a compute target or your local machine, you must ensure that the necessary files are available to that compute target, such as dependency files and data files your code needs to run.
 
-Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using a [datastore](/python/api/azureml-core/azureml.data).
+To run training scripts, Azure Machine Learning copies the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using a [datastore](/python/api/azureml-core/azureml.data).
 
 The storage limit for experiment snapshots is 300 MB and/or 2000 files.
 
@@ -38,7 +38,7 @@ For this reason, we recommend:
 
 ### Storage limits of experiment snapshots
 
-For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the run. This has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
+For experiments, Azure Machine Learning automatically makes an experiment snapshot of your code based on the directory you suggest when you configure the job. This has a total limit of 300 MB and/or 2000 files. If you exceed this limit, you'll see the following error:
 
 ```Python
 While attempting to take snapshot of .
 ```
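One way to stay under the snapshot limit is a `.amlignore` file in the snapshot (source) directory, which excludes matching paths from the snapshot and uses the same syntax and patterns as `.gitignore`. A hypothetical example excluding common large or irrelevant paths:

```
# .amlignore -- placed in the snapshot (source) directory.
# Uses the same syntax and patterns as .gitignore.
data/
outputs/
logs/
.git/
.ipynb_checkpoints/
```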
@@ -56,18 +56,18 @@ Jupyter notebooks| Create a `.amlignore` file or move your notebook into a new,
 
 ## Where to write files
 
-Due to the isolation of training experiments, the changes to files that happen during runs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment run, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment run don't and shouldn't affect those in the second.
+Due to the isolation of training experiments, the changes to files that happen during jobs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment job, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment job don't and shouldn't affect those in the second.
 
 When writing changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](how-to-train-with-datasets.md#where-to-write-training-output).
 
 Otherwise, write files to the `./outputs` and/or `./logs` folder.
 
 >[!Important]
-> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to`./outputs` and`./logs` folders, the files will automatically upload to your run history, so that you have access to them once your run is finished.
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to `./outputs` and `./logs` folders, the files will automatically upload to your job history, so that you have access to them once your job is finished.
 
-* **For output such as status messages or scoring results,** write files to the `./outputs` folder, so they are persisted as artifacts in run history. Be mindful of the number and size of files written to this folder, as latency may occur when the contents are uploaded to run history. If latency is a concern, writing files to a datastore is recommended.
+* **For output such as status messages or scoring results,** write files to the `./outputs` folder, so they are persisted as artifacts in job history. Be mindful of the number and size of files written to this folder, as latency may occur when the contents are uploaded to job history. If latency is a concern, writing files to a datastore is recommended.
 
-* **To save written files as logs in run history,** write files to the `./logs` folder. The logs are uploaded in real time, so this method is suitable for streaming live updates from a remote run.
+* **To save written files as logs in job history,** write files to the `./logs` folder. The logs are uploaded in real time, so this method is suitable for streaming live updates from a remote job.
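As a minimal sketch of the write pattern described above (file names and contents are hypothetical), a training script only needs to write under the two special folders:

```python
import os

# Files under ./outputs are persisted to job history after the job finishes;
# files under ./logs are uploaded in near-real time while the job runs.
os.makedirs("outputs", exist_ok=True)
os.makedirs("logs", exist_ok=True)

# Final artifact: hypothetical scoring results, persisted as a job artifact.
with open(os.path.join("outputs", "scores.csv"), "w") as f:
    f.write("id,score\n1,0.87\n")

# Streaming log line: visible live in the job view as it is appended.
with open(os.path.join("logs", "progress.log"), "a") as f:
    f.write("epoch 1 complete\n")
```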