
Commit 6d3a266

Added model-loading code back. Modified discussion re dataports not baked objects
1 parent 14a4625 commit 6d3a266

1 file changed: +21 -9 lines changed

articles/machine-learning/how-to-use-automlstep-in-pipelines.md

````diff
@@ -409,22 +409,20 @@ The code above combines the data preparation, automated ML, and model-registering
 
 ### Examine pipeline results
 
-Once the `run` completes, you can retrieve `PipelineData` objects that have been assigned a `pipeline_output_name`.
+Once the `run` completes, you can retrieve `PipelineData` objects that have been assigned a `pipeline_output_name`. You can download the results and load them for further processing.
 
 ```python
-metrics_output = run.get_pipeline_output('metrics_output')
-model_output = run.get_pipeline_output('model_output')
-```
-
-You can work directly with the results or download and reload them at a later time for further processing.
+metrics_output_port = run.get_pipeline_output('metrics_output')
+model_output_port = run.get_pipeline_output('model_output')
 
-```python
-metrics_output.download('.', show_progress=True)
-model_output.download('.', show_progress=True)
+metrics_output_port.download('.', show_progress=True)
+model_output_port.download('.', show_progress=True)
 ```
 
 Downloaded files are written to the sub-directory `azureml/{run.id}/`. The metrics file is JSON-formatted and can be converted into a Pandas dataframe for examination.
 
+For local processing, you may need to install relevant packages, such as Pandas, Pickle, the AzureML SDK, and so forth.
+
 ```python
 import pandas as pd
 import json
````
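The `pandas`/`json` snippet above is cut off at the hunk boundary, but the second hunk's header shows where it ends up: `df = pd.DataFrame(deserialized_metrics_output)`. The following is a rough, self-contained sketch of that metrics-loading flow; the file name and the sample metric values here are illustrative assumptions, not part of the commit:

```python
import json
import pandas as pd

# Illustrative stand-in for the downloaded metrics file (real runs write it
# under azureml/{run.id}/; this file name is an assumption).
metrics_filename = "metrics_output.json"
sample = {"AUC_weighted": {"0": 0.91, "1": 0.94},
          "accuracy": {"0": 0.88, "1": 0.90}}  # fabricated example values
with open(metrics_filename, "w") as f:
    json.dump(sample, f)

# Deserialize the JSON metrics and convert to a DataFrame:
# one column per metric, one row per automated ML iteration.
with open(metrics_filename) as f:
    deserialized_metrics_output = json.load(f)

df = pd.DataFrame(deserialized_metrics_output)
print(df.shape)  # (2, 2)
```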
@@ -440,9 +438,23 @@ df = pd.DataFrame(deserialized_metrics_output)
440438

441439
The code snippet above shows the metrics file being loaded from it's location on the Azure datastore. You can also load it from the downloaded file, as shown in the comment. Once you've deserialized it and converted it to a Pandas DataFrame, you can see detailed metrics for each of the iterations of the automated ML step.
442440

443-
The model file can be deserialized into a `Model` object that you can use for inferencing, further metrics analysis, and so forth. To load a `Model` locally, you'll need to have installed the Azure ML SDK. For more information on loading and working with existing models, see [Use an existing model with Azure Machine Learning](how-to-deploy-existing-model.md).
441+
The model file can be deserialized into a `Model` object that you can use for inferencing, further metrics analysis, and so forth.
442+
443+
```python
444+
import pickle
445+
446+
model_filename = model_output._path_on_datastore
447+
# model_filename = path to downloaded file
448+
449+
with open(model_filename, "rb" ) as f:
450+
best_model = pickle.load(f)
451+
452+
# ... inferencing code not shown ...
453+
```
454+
455+
For more information on loading and working with existing models, see [Use an existing model with Azure Machine Learning](how-to-deploy-existing-model.md).
444456

445-
### Download the results of an automated ML run
457+
### Download the results of an automated ML run
446458

447459
If you've been following along with the article, you'll have an instantiated `run` object. But you can also retrieve completed `Run` objects from the `Workspace` by way of an `Experiment` object.
448460
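As a companion to the pickle snippet the commit adds, here is a minimal round-trip sketch showing what loading and "inferencing" with the unpickled model looks like. The `TinyModel` class, file name, and `predict` behavior are illustrative assumptions; a real run would unpickle the AutoML-fitted model from the datastore path or the downloaded file instead:

```python
import pickle

# Stand-in "model": any picklable object with a predict method round-trips
# the same way the real AutoML model file does (illustrative only).
class TinyModel:
    def predict(self, xs):
        return [x * 2 for x in xs]

model_filename = "best_model.pkl"  # hypothetical local file name
with open(model_filename, "wb") as f:
    pickle.dump(TinyModel(), f)

# Deserialize and run a prediction, mirroring the pattern in the diff above.
with open(model_filename, "rb") as f:
    best_model = pickle.load(f)

print(best_model.predict([1, 2, 3]))  # [2, 4, 6]
```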
