Commit 2b92427

Merge pull request #108292 from likebupt/update-0319
correct typo
2 parents f1715a2 + 9927837

3 files changed (+5 -5 lines changed)
articles/machine-learning/algorithm-module-reference/edit-metadata.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ ms.date: 02/11/2020
 
 This article describes a module included in Azure Machine Learning designer (preview).
 
-Use the Edit Data module to change metadata that's associated with columns in a dataset. The value and data type of the dataset will change after use of the Edit Metadata module.
+Use the Edit Metadata module to change metadata that's associated with columns in a dataset. The value and data type of the dataset will change after use of the Edit Metadata module.
 
 Typical metadata changes might include:
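The Edit Metadata module itself is configured in the designer UI, but the effect it describes — renaming a column and changing its data type — can be sketched in plain Python. The dataset, column names, and helper function below are invented for illustration; they are not part of the module's API:

```python
# Illustrative analogue of the Edit Metadata module's effect: rename a
# column and convert its values to a new data type, leaving other columns
# (and the original dataset) untouched.

def edit_metadata(rows, column, new_name=None, new_type=None):
    """Return a copy of `rows` with one column renamed and/or converted."""
    out = []
    for row in rows:
        row = dict(row)          # copy so the input dataset is unchanged
        value = row.pop(column)
        if new_type is not None:
            value = new_type(value)   # e.g. str -> float conversion
        row[new_name or column] = value
        out.append(row)
    return out

data = [{"MedianValue": "24.0"}, {"MedianValue": "21.6"}]
converted = edit_metadata(
    data, "MedianValue", new_name="MedianHomePrice", new_type=float
)
# converted[0] == {"MedianHomePrice": 24.0}
```

In the designer, the same three choices (which column, its new name, its new data type) are made in the module's settings rather than in code.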

articles/machine-learning/algorithm-module-reference/tune-model-hyperparameters.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -15,7 +15,7 @@ ms.date: 02/11/2020
 
 This article describes how to use the Tune Model Hyperparameters module in Azure Machine Learning designer (preview). The goal is to determine the optimum hyperparameters for a machine learning model. The module builds and tests multiple models by using different combinations of settings. It compares metrics over all models to get the combinations of settings.
 
-The terms *parameter* and *hyperparameter* can be confusing. The model's *parameters* are what you set in the properties pane. Basically, this module performs a *parameter sweep* over the specified parameter settings. It learns an optimal set of _hyperparameters_, which might be different for each specific decision tree, dataset, or regression method. The process of finding the optimal configuration is sometimes called *tuning*.
+The terms *parameter* and *hyperparameter* can be confusing. The model's *parameters* are what you set in the right pane of the module. Basically, this module performs a *parameter sweep* over the specified parameter settings. It learns an optimal set of _hyperparameters_, which might be different for each specific decision tree, dataset, or regression method. The process of finding the optimal configuration is sometimes called *tuning*.
 
 The module supports the following method for finding the optimum settings for a model: *integrated train and tune.* In this method, you configure a set of parameters to use. You then let the module iterate over multiple combinations. The module measures accuracy until it finds a "best" model. With most learner modules, you can choose which parameters should be changed during the training process, and which should remain fixed.
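The "integrated train and tune" loop the article describes is, at its core, a grid-style parameter sweep: enumerate every combination of the specified settings, train and score a model for each, and keep the best. A minimal stdlib sketch, where the parameter grid and the `train_and_score` function are invented stand-ins for the learner module and its accuracy metric:

```python
import itertools

# Hypothetical sweep settings; in the designer these come from the ranges
# you configure on the learner module.
param_grid = {
    "max_depth": [2, 4, 8],
    "min_samples": [1, 5],
}

def train_and_score(max_depth, min_samples):
    """Stand-in for training a model and returning its accuracy."""
    return 1.0 - abs(max_depth - 4) / 10 - abs(min_samples - 5) / 100

# Parameter sweep: try every combination, keep the best-scoring one.
best_score, best_params = float("-inf"), None
for combo in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), combo))
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

# best_params == {"max_depth": 4, "min_samples": 5}
```

Note the cost grows multiplicatively with each swept parameter (here 3 × 2 = 6 models), which is why the article suggests fixing the parameters that don't need tuning.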

articles/machine-learning/how-to-debug-pipelines.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ The following sections provide an overview of common pitfalls when building pipe
 
 One of the most common failures in a pipeline is that an attached script (data cleansing script, scoring script, etc.) is not running as intended, or contains runtime errors in the remote compute context that are difficult to debug in your workspace in the Azure Machine Learning studio.
 
-Pipelines themselves cannot be run locally, but running the scripts in isolation on your local machine allows you to debug faster because you dont have to wait for the compute and environment build process. Some development work is required to do this:
+Pipelines themselves cannot be run locally, but running the scripts in isolation on your local machine allows you to debug faster because you don't have to wait for the compute and environment build process. Some development work is required to do this:
 
 * If your data is in a cloud datastore, you will need to download data and make it available to your script. Using a small sample of your data is a good way to cut down on runtime and quickly get feedback on script behavior
 * If you are attempting to simulate an intermediate pipeline step, you may need to manually build the object types that the particular script is expecting from the prior step
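The bullets above amount to writing a small local harness: load a sample of the data yourself, hand the script the objects it would normally receive from the prior step, and call its entry point directly. A minimal sketch, where `clean_data` and the in-memory sample are hypothetical stand-ins for your own step script and downloaded data:

```python
import csv
import io

# Hypothetical cleansing step: in the real pipeline this logic lives in the
# attached script and receives its input from the prior pipeline step.
def clean_data(rows):
    """Drop rows with a missing 'price' field and convert it to float."""
    return [dict(r, price=float(r["price"])) for r in rows if r["price"]]

# Locally, simulate the prior step's output with a small in-memory sample
# instead of waiting for compute provisioning and the full datastore download.
sample_csv = "price,city\n12.5,Seattle\n,Tacoma\n9.0,Renton\n"
rows = list(csv.DictReader(io.StringIO(sample_csv)))

cleaned = clean_data(rows)
# Two rows survive; the row with the blank price was dropped.
```

Once the logic behaves correctly on the sample, the same function can be attached to the pipeline step unchanged, which keeps the fast local loop and the remote run in sync.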
@@ -133,7 +133,7 @@ For pipelines created in the designer, you can find the **log files** on either
 When you submit a pipeline run and stay in the authoring page, you can find the log files generated for each module.
 
 1. Select any module in the authoring canvas.
-1. In the properties pane, go to the **Logs** tab.
+1. In the right pane of the module, go to the **Outputs+logs** tab.
 1. Select the log file `70_driver_log.txt`
 
 ![Authoring page module logs](./media/how-to-debug-pipelines/pipelinerun-05.png)
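`70_driver_log.txt` captures the output of the module's script, so once downloaded, a quick first pass is to scan it for the lines that usually point at the failure. A minimal sketch; the log content below is invented for illustration, and real driver logs are much longer:

```python
# Triage a downloaded driver log by pulling out likely failure lines.
log_text = """\
Entering Run History Context Manager.
Preparing to call script with arguments.
Traceback (most recent call last):
  File "clean.py", line 12, in <module>
ValueError: could not convert string to float: 'N/A'
"""

markers = ("Traceback", "Error", "error:")
suspect_lines = [
    line for line in log_text.splitlines()
    if any(marker in line for marker in markers)
]
# suspect_lines holds the Traceback line and the ValueError line
```

This is only a convenience for long logs; for short ones, opening the file from the tab described above is just as fast.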
@@ -145,7 +145,7 @@ You can also find the log files of specific runs in the pipeline run detail page
 1. Select a pipeline run created in the designer.
 ![Pipeline run page](./media/how-to-debug-pipelines/pipelinerun-04.png)
 
 1. Select any module in the preview pane.
-1. In the properties pane, go to the **Logs** tab.
+1. In the right pane of the module, go to the **Outputs+logs** tab.
 1. Select the log file `70_driver_log.txt`
 
 ## Debug and troubleshoot in Application Insights
