Commit 767cc87

Updates from PM review
1 parent d31a7ac commit 767cc87

2 files changed: +4 −4 lines changed

articles/machine-learning/how-to-machine-learning-fairness-aml.md

Lines changed: 1 addition & 1 deletion

@@ -306,7 +306,7 @@ To compare multiple models and see how their fairness assessments differ, you ca
 ```python
 from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id
 ```
-Create an Experiment, then a Job, and upload the dashboard to it:
+Create an Experiment, then a Run, and upload the dashboard to it:
 ```python
 exp = Experiment(ws, "Compare_Two_Models_Fairness_Census_Demo")
 print(exp)
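
For context, here is a minimal sketch of how the renamed Run is then used to upload the dashboard. It assumes `ws` is an existing `Workspace` and `dash_dict` is the fairness dashboard dictionary assembled earlier in the article; the dashboard title is illustrative:

```python
from azureml.core import Experiment
from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id

# Assumptions: `ws` is an existing Workspace and `dash_dict` is a fairness
# dashboard dictionary built earlier (e.g. from fairlearn metrics).
exp = Experiment(ws, "Compare_Two_Models_Fairness_Census_Demo")
print(exp)

run = exp.start_logging()
try:
    # Upload the dashboard under this Run; the returned id can be used
    # to download the same dictionary back for verification.
    upload_id = upload_dashboard_dictionary(run,
                                            dash_dict,
                                            dashboard_name="Fairness comparison of two models")
    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)
finally:
    run.complete()
```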

articles/machine-learning/how-to-machine-learning-interpretability-aml.md

Lines changed: 3 additions & 3 deletions

@@ -26,7 +26,7 @@ In this how-to guide, you learn to use the interpretability package of the Azure
 
 * Explain the behavior for the entire model and individual predictions in Azure.
 
-* Upload explanations to Azure Machine Learning Job History.
+* Upload explanations to Azure Machine Learning Run History.
 
 * Use a visualization dashboard to interact with your model explanations, both in a Jupyter Notebook and in the Azure Machine Learning studio.
 

@@ -229,9 +229,9 @@ tabular_explainer = TabularExplainer(clf.steps[-1][1],
                                      transformations=transformations)
 ```
 
-## Generate feature importance values via remote jobs
+## Generate feature importance values via remote runs
 
-The following example shows how you can use the `ExplanationClient` class to enable model interpretability for remote jobs. It’s conceptually similar to the local process, except you:
+The following example shows how you can use the `ExplanationClient` class to enable model interpretability for remote runs. It’s conceptually similar to the local process, except you:
 
 * Use the `ExplanationClient` in the remote run to upload the interpretability context.
 * Download the context later in a local environment.
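
For reference, a rough sketch of that upload/download round trip, not taken from this commit: it assumes the `tabular_explainer` and an `x_test` dataset from the article's earlier steps, and the `azureml.interpret` import path of newer SDK releases (older releases exposed the client under `azureml.contrib.interpret`); the experiment name and run id are placeholders:

```python
# Inside the remote run's training script: upload the interpretability context.
from azureml.core.run import Run
from azureml.interpret import ExplanationClient

run = Run.get_context()
client = ExplanationClient.from_run(run)

# `tabular_explainer` and `x_test` are assumed from the TabularExplainer setup above.
global_explanation = tabular_explainer.explain_global(x_test)
client.upload_model_explanation(global_explanation,
                                comment="global explanation: all features")

# Later, in a local environment: download the context from Run History.
from azureml.core import Workspace

ws = Workspace.from_config()
local_client = ExplanationClient.from_run_id(ws,
                                             experiment_name="explain_model",
                                             run_id="<remote-run-id>")
downloaded_explanation = local_client.download_model_explanation()
```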
