The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
Having your logs in one place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.
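As a minimal sketch of the setup, the following wires a standard Python logger to Application Insights through OpenCensus. The instrumentation key is a placeholder; use the connection string from your own Application Insights resource:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Placeholder connection string; replace with your Application Insights key.
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
))

logger.warning("I will be sent to Application Insights")
```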
## Logging with Custom Dimensions
By default, logs forwarded to Application Insights won't have enough context to trace back to the run or experiment. To make the logs actionable for diagnosing issues, additional fields are needed.
To provide this context, Custom Dimensions can be added to a log message. One example is when someone wants to view logs across multiple steps in the same pipeline run.
Custom Dimensions make up a dictionary of key-value (stored as string, string) pairs. The dictionary is then sent to Application Insights and displayed as a column in the query results. Its individual dimensions can be used as [query parameters](#additional-helpful-queries).
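As a hedged sketch (the field names here are illustrative, not prescribed), Custom Dimensions are attached by passing an `extra` dictionary with a `custom_dimensions` key to a logger configured with `AzureLogHandler`, as shown earlier:

```python
from azureml.core import Run

# Inside a pipeline step, get the step's run context.
run = Run.get_context()

custom_dimensions = {
    "run_id": run.id,
    "parent_run_id": run.parent.id if run.parent else "",
    "experiment_name": run.experiment.name,
}

# logger is assumed to already have an AzureLogHandler attached.
logger.warning("I will be sent to Application Insights",
               extra={"custom_dimensions": custom_dimensions})
```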
You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
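If you'd rather pull the same run information programmatically, here's a minimal sketch using the `azureml-core` SDK (the experiment name is a placeholder):

```python
from azureml.core import Workspace, Experiment

ws = Workspace.from_config()  # loads workspace details from config.json
exp = Experiment(ws, "my-experiment")  # placeholder experiment name

# List each run with its status and any logged metrics.
for run in exp.get_runs():
    print(run.id, run.get_status())
    print(run.get_metrics())
```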
Log files are an essential resource for debugging Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
1. Navigate to the **Jobs** tab.
1. Select the run ID for a specific run.
1. Select **Outputs and logs** at the top of the page.
1. Select **Download all** to download all your logs into a zip folder.
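The same logs can also be fetched without the studio UI; a hedged sketch using the `azureml-core` SDK (the run ID is a placeholder copied from the **Jobs** tab):

```python
from azureml.core import Workspace, Run

ws = Workspace.from_config()
run = Run.get(ws, run_id="<your-run-id>")  # placeholder run ID

# Download every log file for the run into ./logs.
run.get_all_logs(destination="./logs")
```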
The following example shows how to use the fairness package. We will upload model fairness insights into Azure Machine Learning:
```python
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
```
Create an Experiment, then a Run, and upload the dashboard to it:
```python
from azureml.core import Experiment

exp = Experiment(ws, "Test_Fairness_Census_Demo")  # ws is your Workspace object
print(exp)
```
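The snippet above stops after creating the Experiment; the following hedged continuation sketches the upload itself, assuming `dash_dict` is a fairness dashboard dictionary you've already built with Fairlearn:

```python
# Start a run under the experiment created above.
run = exp.start_logging()

try:
    # dash_dict is assumed to be a pre-built fairness dashboard dictionary.
    upload_id = upload_dashboard_dictionary(
        run, dash_dict, dashboard_name="Fairness insights"
    )
    print(f"Uploaded to id: {upload_id}")

    # Optional round trip: download what was just uploaded.
    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
    run.complete()
```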
If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (e.g., male vs. female).
Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
**Jobs pane (Preview)**

1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
1. Select a particular experiment to view all the runs in that experiment.
1. Select a run, and then select the **Fairness** tab to view the explanation visualization dashboard.
1. Once you land on the **Fairness** tab, select a **fairness id** from the menu on the right.
1. Configure your dashboard by selecting your sensitive attribute, performance metric, and fairness metric of interest to land on the fairness assessment page.
1. Switch the chart type from one to another to observe both **allocation** harms and **quality of service** harms.
# Manage software environments in Azure Machine Learning studio
In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) in the Azure Machine Learning studio. Use the environments to track and reproduce your projects' software dependencies as they evolve.
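Environments managed in the studio can also be created from code; a minimal sketch with the `azureml-core` SDK (the environment name and conda file path are placeholders):

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()

# Build an environment from a conda specification file (placeholder path).
env = Environment.from_conda_specification(
    name="my-training-env", file_path="./conda_dependencies.yml"
)

# Register it so it shows up in the studio's Environments list.
env.register(workspace=ws)
```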