
Commit aa94f6f

Author: craigcaseyMSFT
Message: fixes after review
Parent: cac2b96

File tree

4 files changed: +19 -19 lines changed


articles/machine-learning/service/how-to-machine-learning-interpretability-aml.md

Lines changed: 4 additions & 4 deletions
@@ -21,7 +21,7 @@ In this article, you learn to use the interpretability package of the Azure Mach

 * Interpret machine learning models trained both locally and on remote compute resources.
 * Store local and global explanations on Azure Run History.
-* View interpretability visualizations in [Azure Machine Learning Studio (classic)](https://ml.azure.com).
+* View interpretability visualizations in [Azure Machine Learning studio](https://ml.azure.com).
 * Deploy a scoring explainer with your model.

 For more information, see [Model interpretability in Azure Machine Learning](how-to-machine-learning-interpretability.md).
@@ -337,9 +337,9 @@ from azureml.contrib.interpret.visualize import ExplanationDashboard
 ExplanationDashboard(global_explanation, model, x_test)
 ```

-### Visualization in Azure Machine Learning Studio (classic)
+### Visualization in Azure Machine Learning studio

-If you complete the [remote interpretability](#interpretability-for-remote-runs) steps, you can view the visualization dashboard in [Azure Machine Learning Studio (classic)](https://ml.azure.com). This dashboard is a simpler version of the visualization dashboard explained above. It only supports two tabs:
+If you complete the [remote interpretability](#interpretability-for-remote-runs) steps, you can view the visualization dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the visualization dashboard explained above. It only supports two tabs:

 |Plot|Description|
 |----|-----------|
@@ -348,7 +348,7 @@ If you complete the [remote interpretability](#interpretability-for-remote-runs)

 If both global and local explanations are available, data populates both tabs. If only a global explanation is available, the Summary Importance tab is disabled.

-Follow one of these paths to access the visualization dashboard in Azure Machine Learning Studio (classic):
+Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:

 * **Experiments** pane (Preview)
 1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
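
For context on what feeds `ExplanationDashboard` after a remote run, a minimal sketch follows. It assumes the contrib explanation client of this SDK generation (`azureml.contrib.interpret.explanation.explanation_client.ExplanationClient`) and its `download_model_explanation()` method; the experiment name is hypothetical, and the import paths should be checked against your installed SDK version rather than read as the documented API of this commit.

```python
# Minimal sketch (hypothetical experiment name, contrib-era import paths):
# download the global explanation a remote run uploaded and view it locally.
from azureml.core import Workspace, Experiment
from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
from azureml.contrib.interpret.visualize import ExplanationDashboard

def show_remote_explanation(run, model, x_test):
    """Fetch the run's uploaded global explanation and open the dashboard."""
    client = ExplanationClient.from_run(run)
    global_explanation = client.download_model_explanation()
    ExplanationDashboard(global_explanation, model, x_test)

ws = Workspace.from_config()
latest_run = next(Experiment(ws, 'explain-remote').get_runs())  # most recent run
# show_remote_explanation(latest_run, model, x_test)  # pass your trained model and test data
```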

articles/machine-learning/service/how-to-machine-learning-interpretability-automl.md

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ if service.state == 'Healthy':

 ### Visualize to discover patterns in data and explanations at training time

-You can visualize the feature importance chart in your workspace in [Azure Machine Learning Studio (classic)](https://ml.azure.com). After your automated ML run is complete, select **View model details** to view a specific run. Select the **Explanations** tab to see the explanation visualization dashboard.
+You can visualize the feature importance chart in your workspace in [Azure Machine Learning studio](https://ml.azure.com). After your automated ML run is complete, select **View model details** to view a specific run. Select the **Explanations** tab to see the explanation visualization dashboard.

 [![Machine Learning Interpretability Architecture](./media/machine-learning-interpretability-explainability/automl-explainability.png)](./media/machine-learning-interpretability-explainability/automl-explainability.png#lightbox)


articles/machine-learning/service/how-to-monitor-datasets.md

Lines changed: 12 additions & 12 deletions
@@ -29,7 +29,7 @@ With Azure Machine Learning dataset monitors, you can:
 Metrics and insights are available through the [Azure Application Insights](https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview) resource associated with the Azure Machine Learning workspace.

 > [!Important]
-> Please note that monitoring data drift with the SDK is available in all editions, while monitoring data drift through the Studio (classic) on the web is Enterprise edition only.
+> Please note that monitoring data drift with the SDK is available in all editions, while monitoring data drift through the studio on the web is Enterprise edition only.

 ## Prerequisites

@@ -70,7 +70,7 @@ Using Azure Machine Learning, data drift is monitored through datasets. To monit

 ### Set the `timeseries` trait in the target dataset

-The target dataset needs to have the `timeseries` trait set on it by specifying the timestamp column either from a column in the data or a virtual column derived from the path pattern of the files. This can be done through the Python SDK or Azure Machine Learning Studio (classic). A column representing a "fine grain" timestamp must be specified to add `timeseries` trait to the dataset. If your data is partitioned into folder structure with time info, such as '{yyyy/MM/dd}', you can create a virtual column through the path pattern setting and set it as the "coarse grain" timestamp to improve the importance of time series functionality.
+The target dataset needs to have the `timeseries` trait set on it by specifying the timestamp column either from a column in the data or a virtual column derived from the path pattern of the files. This can be done through the Python SDK or Azure Machine Learning studio. A column representing a "fine grain" timestamp must be specified to add `timeseries` trait to the dataset. If your data is partitioned into folder structure with time info, such as '{yyyy/MM/dd}', you can create a virtual column through the path pattern setting and set it as the "coarse grain" timestamp to improve the importance of time series functionality.

 #### Python SDK

@@ -103,10 +103,10 @@ dset = dset.register(ws, 'target')

 For a full example of using the `timeseries` trait of datasets, see the [example notebook](https://aka.ms/azureml-tsd-notebook) or the [datasets SDK documentation](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#with-timestamp-columns-fine-grain-timestamp--coarse-grain-timestamp-none--validate-false-).

-#### Azure Machine Learning Studio (classic)
+#### Azure Machine Learning studio
 [!INCLUDE [applies-to-skus](../../../includes/aml-applies-to-enterprise-sku-inline.md)]

-If you create your dataset using Azure Machine Learning Studio (classic), ensure the path to your data contains timestamp information, include all subfolders with data, and set the partition format.
+If you create your dataset using Azure Machine Learning studio, ensure the path to your data contains timestamp information, include all subfolders with data, and set the partition format.

 In the following example, all data under the subfolder *NoaaIsdFlorida/2019* is taken, and the partition format specifies the timestamp's year, month, and day.

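
Since the hunk above references `dset.register(ws, 'target')` and links the `with_timestamp_columns` SDK reference, here is a minimal Python SDK sketch of setting the `timeseries` trait before registering. The datastore path, dataset name, and column name are hypothetical placeholders; the call shape follows the `with_timestamp_columns(fine_grain_timestamp=...)` signature linked above, not code from this commit.

```python
# Minimal sketch (hypothetical path and column names): register a target dataset
# with the `timeseries` trait set from a "fine grain" timestamp column.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
dstore = ws.get_default_datastore()

# Build a tabular dataset from files in the datastore (path is a placeholder).
dset = Dataset.Tabular.from_delimited_files(path=(dstore, 'target-data/*.csv'))

# Mark the timestamp column so dataset monitors can slice the data over time.
dset = dset.with_timestamp_columns(fine_grain_timestamp='datetime')

# Register it under the name the monitor will reference.
dset = dset.register(ws, 'target')
```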

@@ -157,14 +157,14 @@ These settings are for running a backfill on past data for data drift metrics.

 ## Create dataset monitors

-Create dataset monitors to detect and alert to data drift on a new dataset with Azure Machine Learning Studio (classic) or the Python SDK.
+Create dataset monitors to detect and alert to data drift on a new dataset with Azure Machine Learning studio or the Python SDK.

-### Azure Machine Learning Studio (classic)
+### Azure Machine Learning studio
 [!INCLUDE [applies-to-skus](../../../includes/aml-applies-to-enterprise-sku-inline.md)]

 To set up alerts on your dataset monitor, the workspace that contains the dataset you want to create a monitor for must have Enterprise edition capabilities.

-After the workspace functionality is confirmed, navigate to the Studio (classic)'s homepage and select the Datasets tab on the left. Select Dataset monitors.
+After the workspace functionality is confirmed, navigate to the studio's homepage and select the Datasets tab on the left. Select Dataset monitors.

 ![Monitor list](media/how-to-monitor-datasets/monitor-list.png)

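
The SDK path mentioned in the changed line above is not shown in this commit, so the following is only a sketch of creating and scheduling a monitor with the `azureml-datadrift` package; the dataset names, compute target, feature list, and email address are hypothetical, and the `create_from_datasets` arguments should be verified against the datadrift SDK reference for your version.

```python
# Minimal sketch (hypothetical names): create a weekly dataset monitor with the Python SDK.
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector, AlertConfiguration

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, 'baseline')   # e.g. a January 2019 slice
target = Dataset.get_by_name(ws, 'target')       # timeseries dataset registered earlier

monitor = DataDriftDetector.create_from_datasets(
    ws, 'weather-drift-monitor', baseline, target,
    compute_target='cpu-cluster',                # existing AML compute (placeholder name)
    frequency='Week',                            # how often the monitor runs
    feature_list=['temperature', 'windSpeed'],   # omit to monitor all common columns
    drift_threshold=0.3,                         # alert when drift magnitude exceeds this
    alert_config=AlertConfiguration(email_addresses=['user@contoso.com']),
)

monitor.enable_schedule()                        # start the scheduled runs
```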

@@ -241,7 +241,7 @@ The **Drift overview** section contains top-level insights into the magnitude of
 | Data drift magnitude | Given as a percentage between the baseline and target dataset over time. Ranging from 0 to 100 where 0 indicates identical datasets and 100 indicates the Azure Machine Learning data drift capability can completely tell the two datasets apart. | Noise in the precise percentage measured is expected due to machine learning techniques being used to generate this magnitude. |
 | Drift contribution by feature | The contribution of each feature in the target dataset to the measured drift magnitude. | Due to covariate shift, the underlying distribution of a feature does not necessarily need to change to have relatively high feature importance. |

-The following image is an example of charts seen in the **Drift overview** results in Azure Machine Learning Studio (classic), resulting from a backfill of [NOAA Integrated Surface Data](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/). Data was sampled to `stationName contains 'FLORIDA'`, with January 2019 being used as the baseline dataset and all 2019 data used as the target.
+The following image is an example of charts seen in the **Drift overview** results in Azure Machine Learning studio, resulting from a backfill of [NOAA Integrated Surface Data](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/). Data was sampled to `stationName contains 'FLORIDA'`, with January 2019 being used as the baseline dataset and all 2019 data used as the target.

 ![Drift overview](media/how-to-monitor-datasets/drift-overview.png)


@@ -251,13 +251,13 @@ The **Feature details** section contains feature-level insights into the change

 The target dataset is also profiled over time. The statistical distance between the baseline distribution of each feature is compared with the target dataset's over time, which is conceptually similar to the data drift magnitude with the exception that this statistical distance is for an individual feature. Min, max, and mean are also available.

-In the Azure Machine Learning Studio (classic), if you click on a data point in the graph the distribution of the feature being shown will adjust accordingly. By default, it shows the baseline dataset's distribution and the most recent run's distribution of the same feature.
+In the Azure Machine Learning studio, if you click on a data point in the graph the distribution of the feature being shown will adjust accordingly. By default, it shows the baseline dataset's distribution and the most recent run's distribution of the same feature.

 These metrics can also be retrieved in the Python SDK through the `get_metrics()` method on a `DataDriftDetector` object.

 #### Numeric features

-Numeric features are profiled in each dataset monitor run. The following are exposed in the Azure Machine Learning Studio (classic). Probability density is shown for the distribution.
+Numeric features are profiled in each dataset monitor run. The following are exposed in the Azure Machine Learning studio. Probability density is shown for the distribution.

 | Metric | Description |
 | ------ | ----------- |
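
The `get_metrics()` call mentioned in the context line above is not expanded anywhere in this commit; a minimal sketch of retrieving drift metrics (and backfilling past dates first) might look like the following, with the monitor name and date range as hypothetical placeholders.

```python
# Minimal sketch (hypothetical monitor name and dates): pull drift metrics via the SDK.
from datetime import datetime
from azureml.core import Workspace
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
monitor = DataDriftDetector.get_by_name(ws, 'weather-drift-monitor')

# Optionally compute metrics for past dates; metrics become available once the backfill run completes.
backfill_run = monitor.backfill(datetime(2019, 1, 1), datetime(2019, 12, 31))

# The same numbers that back the studio charts: drift magnitude, per-feature contributions, and so on.
metrics = monitor.get_metrics()
print(metrics)
```
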
@@ -270,7 +270,7 @@ Numeric features are profiled in each dataset monitor run. The following are exp

 #### Categorical features

-Numeric features are profiled in each dataset monitor run. The following are exposed in the Azure Machine Learning Studio (classic). A histogram is shown for the distribution.
+Numeric features are profiled in each dataset monitor run. The following are exposed in the Azure Machine Learning studio. A histogram is shown for the distribution.

 | Metric | Description |
 | ------ | ----------- |
@@ -322,6 +322,6 @@ Columns, or features, in the dataset are classified as categorical or numeric ba

 ## Next steps

-* Head to the [Azure Machine Learning Studio (classic)](https://ml.azure.com) or the [Python notebook](https://aka.ms/datadrift-notebook) to set up a dataset monitor.
+* Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://aka.ms/datadrift-notebook) to set up a dataset monitor.
 * See how to set up data drift on [models deployed to Azure Kubernetes Service](how-to-monitor-data-drift.md).
 * Set up dataset drift monitors with [event grid](how-to-use-event-grid.md).

articles/machine-learning/service/resource-known-issues.md

Lines changed: 2 additions & 2 deletions
@@ -173,11 +173,11 @@ If you see a `FailToSendFeather` error when reading data on Azure Databricks clu

 ## Azure portal

-If you go directly to view your workspace from a share link from the SDK or the portal, you will not be able to view the normal Overview page with subscription information in the extension. You will also not be able to switch into another workspace. If you need to view another workspace, the workaround is to go directly to [Azure Machine Learning Studio (classic)](https://ml.azure.com) and search for the workspace name.
+If you go directly to view your workspace from a share link from the SDK or the portal, you will not be able to view the normal Overview page with subscription information in the extension. You will also not be able to switch into another workspace. If you need to view another workspace, the workaround is to go directly to [Azure Machine Learning studio](https://ml.azure.com) and search for the workspace name.

 ## Diagnostic logs

-Sometimes it can be helpful if you can provide diagnostic information when asking for help. To see some logs, visit [Azure Machine Learning Studio (classic)](https://ml.azure.com) and go to your workspace and select **Workspace > Experiment > Run > Logs**.
+Sometimes it can be helpful if you can provide diagnostic information when asking for help. To see some logs, visit [Azure Machine Learning studio](https://ml.azure.com) and go to your workspace and select **Workspace > Experiment > Run > Logs**.

 > [!NOTE]
 > Azure Machine Learning logs information from a variety of sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs are not documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
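
As an SDK-side complement to the studio path above (not part of this commit), the snippet below sketches downloading a run's log files programmatically; the experiment name and output folder are hypothetical, and it assumes the `Run.get_all_logs()` method of the azureml-core SDK.

```python
# Minimal sketch (hypothetical names): download diagnostic logs for the latest run of an experiment.
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()
run = next(Experiment(ws, 'my-experiment').get_runs())  # most recent run

# Pulls the same log files that appear under Workspace > Experiment > Run > Logs in the studio.
log_paths = run.get_all_logs(destination='./run-logs')
print(log_paths)
```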
