
Commit d31a7ac

Updates from PM review
1 parent 3bae6d7 commit d31a7ac

4 files changed, +11 -19 lines changed


articles/machine-learning/how-to-log-pipelines-application-insights.md
Lines changed: 4 additions & 6 deletions

@@ -12,13 +12,11 @@ ms.topic: how-to
 ms.custom: devx-track-python, sdkv1, event-tier1-build-2022
 ---
 
-[//]: # (needs PM review; pipeline jobs? Or pipeline runs? lots of code, please change the code as needed )
-
 # Collect machine learning pipeline log files in Application Insights for alerts and debugging
 
 [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
 
-The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline jobs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across jobs.
+The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
 
 Having your logs in one place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.
 
@@ -64,7 +62,7 @@ sample_step = PythonScriptStep(
     runconfig=run_config
 )
 
-# Submit new pipeline job
+# Submit new pipeline run
 pipeline = Pipeline(workspace=ws, steps=[sample_step])
 pipeline.submit(experiment_name="Logging_Experiment")
 ```
@@ -90,9 +88,9 @@ logger.warning("I will be sent to Application Insights")
 
 ## Logging with Custom Dimensions
 
-By default, logs forwarded to Application Insights won't have enough context to trace back to the job or experiment. To make the logs actionable for diagnosing issues, additional fields are needed.
+By default, logs forwarded to Application Insights won't have enough context to trace back to the run or experiment. To make the logs actionable for diagnosing issues, additional fields are needed.
 
-To add these fields, Custom Dimensions can be added to provide context to a log message. One example is when someone wants to view logs across multiple steps in the same pipeline job.
+To add these fields, Custom Dimensions can be added to provide context to a log message. One example is when someone wants to view logs across multiple steps in the same pipeline run.
 
 Custom Dimensions make up a dictionary of key-value (stored as string, string) pairs. The dictionary is then sent to Application Insights and displayed as a column in the query results. Its individual dimensions can be used as [query parameters](#additional-helpful-queries).
 
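As a point of reference for the terminology above, here is a minimal sketch of the pattern this article describes — routing Python logs to Application Insights with OpenCensus and tagging them with Custom Dimensions so they trace back to a pipeline run. It assumes the `opencensus-ext-azure` package is installed; the connection string and run IDs are placeholders:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Placeholder connection string; use the value from your Application Insights resource.
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
))

# Custom Dimensions: string-to-string pairs passed via the standard `extra`
# argument; they surface as the customDimensions column in query results.
custom_dimensions = {
    "parent_run_id": "<pipeline-run-id>",  # placeholder IDs for illustration
    "step_id": "<step-run-id>",
    "step_name": "sample_step",
}
logger.warning("I will be sent to Application Insights",
               extra={"custom_dimensions": custom_dimensions})
```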

articles/machine-learning/how-to-log-view-metrics.md
Lines changed: 2 additions & 4 deletions

@@ -13,8 +13,6 @@ ms.topic: how-to
 ms.custom: sdkv1, event-tier1-build-2022
 ---
 
-[//]: # (needs PM review; Lots of code, what needs to be changed?)
-
 # Log & view metrics and log files
 
 > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"]
@@ -165,7 +163,7 @@ params = finished_mlflow_run.data.params
 
 You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
 
-Navigate to the **Experiments** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
+Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
 
 For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
 
@@ -176,7 +174,7 @@ You can also edit the job list table to select multiple jobs and display either
 
 Log files are an essential resource for debugging Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
 
-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
 1. Select the runID for a specific run.
 1. Select **Outputs and logs** at the top of the page.
 1. Select **Download all** to download all your logs into a zip folder.
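The hunks above anchor on `params = finished_mlflow_run.data.params`; a minimal sketch of how such a finished run and its logged values are typically retrieved with the MLflow tracking client (the run ID is a placeholder):

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Placeholder run ID; in practice it comes from a submitted and completed run.
finished_mlflow_run = client.get_run("<run-id>")

# Metrics, parameters, and tags recorded for the run.
metrics = finished_mlflow_run.data.metrics
params = finished_mlflow_run.data.params
tags = finished_mlflow_run.data.tags

print(metrics, params, tags)
```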

articles/machine-learning/how-to-machine-learning-fairness-aml.md
Lines changed: 5 additions & 7 deletions

@@ -13,8 +13,6 @@ ms.topic: how-to
 ms.custom: devx-track-python, responsible-ml, sdkv1, event-tier1-build-2022
 ---
 
-[//]: # (needs PM review; what happens with the code?)
-
 # Use Azure Machine Learning with the Fairlearn open-source package to assess the fairness of ML models (preview)
 
 [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
@@ -183,7 +181,7 @@ The following example shows how to use the fairness package. We will upload mode
 ```python
 from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
 ```
-Create an Experiment, then a Job, and upload the dashboard to it:
+Create an Experiment, then a Run, and upload the dashboard to it:
 ```python
 exp = Experiment(ws, "Test_Fairness_Census_Demo")
 print(exp)
@@ -209,10 +207,10 @@ The following example shows how to use the fairness package. We will upload mode
 If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (e.g., male vs. female).
 Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
 
-* **Experiments pane (Preview)**
-1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
-1. Select a particular experiment to view all the jobs in that experiment.
-1. Select a job, and then the **Fairness** tab to the explanation visualization dashboard.
+* **Jobs pane (Preview)**
+1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
+1. Select a particular experiment to view all the runs in that experiment.
+1. Select a run, and then the **Fairness** tab to view the explanation visualization dashboard.
 1. After landing on the **Fairness** tab, click on a **fairness id** from the menu on the right.
 1. Configure your dashboard by selecting your sensitive attribute, performance metric, and fairness metric of interest to land on the fairness assessment page.
 1. Switch chart type from one to another to observe both **allocation** harms and **quality of service** harms.
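A sketch of the "Create an Experiment, then a Run, and upload the dashboard" step touched above, assuming a `dash_dict` already generated from Fairlearn metrics (not shown) and a workspace config file on disk:

```python
from azureml.core import Experiment, Workspace
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id

ws = Workspace.from_config()
exp = Experiment(ws, "Test_Fairness_Census_Demo")
print(exp)

run = exp.start_logging()

# dash_dict is assumed to have been built earlier from Fairlearn metrics;
# the dashboard title is illustrative.
try:
    dashboard_title = "Fairness insights of Logistic Regression Classifier"
    upload_id = upload_dashboard_dictionary(run,
                                            dash_dict,
                                            dashboard_name=dashboard_title)
    print("Uploaded to id: {0}".format(upload_id))

    # Round-trip check: download the dashboard that was just uploaded.
    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
    run.complete()
```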

articles/machine-learning/how-to-manage-environments-in-studio.md
Lines changed: 0 additions & 2 deletions

@@ -12,8 +12,6 @@ ms.topic: how-to
 ms.custom: devx-track-python
 ---
 
-[//]: # (needs PM review)
-
 # Manage software environments in Azure Machine Learning studio
 
 In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) in the Azure Machine Learning studio. Use the environments to track and reproduce your projects' software dependencies as they evolve.
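For context on what this page manages, a minimal SDK v1 sketch of creating and registering an environment so it appears in the studio's Environments list; the environment name and conda file path are placeholders:

```python
from azureml.core import Workspace
from azureml.core.environment import Environment

ws = Workspace.from_config()

# Build an environment from a conda specification file (placeholder path),
# then register it so it shows up in the studio's Environments page.
env = Environment.from_conda_specification(
    name="my-training-env",               # placeholder name
    file_path="./conda_dependencies.yml"  # placeholder path
)
env.register(workspace=ws)
```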
