The snippet above creates the two `PipelineData` objects for the metrics and model output. Each is named, assigned to the default datastore retrieved earlier, and associated with the particular `type` of `TrainingOutput` from the `AutoMLStep`. Because we assign `pipeline_output_name` on these `PipelineData` objects, their values will be available not just from the individual pipeline step, but from the pipeline as a whole, as will be discussed below in the section "Examine pipeline results."
### Configure and create the automated ML pipeline step
The code above combines the data preparation, automated ML, and model-registering steps into a `Pipeline` object. It then creates an `Experiment` object; the `Experiment` constructor retrieves the named experiment if it exists and creates it if necessary. Finally, the code submits the `Pipeline` to the `Experiment`, creating a `Run` object that asynchronously runs the pipeline. The `wait_for_completion()` call blocks until the run completes.
### Examine pipeline results
Once the `run` completes, you can retrieve `PipelineData` objects that have been assigned a `pipeline_output_name`.
You can work directly with the results or download and reload them at a later time for further processing.
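One way to get handles to those outputs is the SDK's `PipelineRun.get_pipeline_output()` method. The sketch below assumes `run` is the completed pipeline run, and the quoted names are hypothetical stand-ins for whatever `pipeline_output_name` values you assigned earlier:

```python
# Sketch (assumed names): retrieve pipeline-level outputs by the
# pipeline_output_name values assigned to the PipelineData objects.
# Substitute the names you actually used.
metrics_output = run.get_pipeline_output('metrics_output')
model_output = run.get_pipeline_output('model_output')
```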
```python
metrics_output.download('.', show_progress=True)
model_output.download('.', show_progress=True)
```
Downloaded files are written to the sub-directory `azureml/{run.id}/`. The metrics file is JSON-formatted, and you can load it either from its location on the Azure datastore or from the downloaded file. Once you've deserialized it and converted it to a Pandas DataFrame, you can see detailed metrics for each iteration of the automated ML step.
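As a concrete illustration of that deserialization step, here's a minimal, self-contained sketch. The file contents below are synthetic stand-ins; in practice you'd point `metrics_filename` at the metrics file downloaded under `azureml/{run.id}/`.

```python
import json

import pandas as pd

# Synthetic stand-in for the downloaded metrics file; in practice, point
# metrics_filename at the file written under azureml/{run.id}/.
metrics_filename = 'metrics_output.json'
sample_metrics = {
    'AUC_weighted': {'iteration_0': 0.91, 'iteration_1': 0.94},
    'accuracy': {'iteration_0': 0.88, 'iteration_1': 0.90},
}
with open(metrics_filename, 'w') as f:
    json.dump(sample_metrics, f)

# Deserialize the JSON metrics and convert them to a DataFrame:
# one row per AutoML iteration, one column per metric.
with open(metrics_filename) as f:
    metrics = json.load(f)
df = pd.DataFrame(metrics)
print(df)
```

From the DataFrame you can sort or filter by any metric to compare iterations.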
The model file can be deserialized into a `Model` object that you can use for inferencing, further metrics analysis, and so forth.
```python
import pickle
model_filename = model_output._path_on_datastore
# model_filename = path to downloaded file
with open(model_filename, "rb") as f:
    best_model = pickle.load(f)
# ... inferencing code not shown ...
```
### Download the results of an automated ML run
If you've been following along with the article, you'll have an instantiated `run` object. But you can also retrieve completed `Run` objects from the `Workspace` by way of an `Experiment` object.
The workspace contains a complete record of all your experiments and runs. You can either use the portal to find and download the outputs of experiments or use code. To access the records from a historic run, use Azure Machine Learning to find the ID of the run in which you're interested. With that ID, you can choose the specific `run` by way of the `Workspace` and `Experiment`.
```python
# Retrieved from Azure Machine Learning web UI
run_id = 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB'
experiment = ws.experiments['titanic_automl']
run = next(run for run in experiment.get_runs() if run.id == run_id)
```
You'd change the strings in the code above to the specifics of your historical run. The snippet assumes that you've assigned `ws` to the relevant `Workspace` with the usual `from_config()` call. The experiment of interest is retrieved directly, and then the code finds the `Run` of interest by matching the `run.id` value.
Once you have a `Run` object, you can download the metrics and model.
```python
automl_run = next(r for r in run.get_children() if r.name == 'AutoML_Classification')
```
Each `Run` object contains `StepRun` objects that contain information about the individual pipeline step run. The `run` is searched for the `StepRun` object for the `AutoMLStep`. The metrics and model are retrieved using their default names, which are available even if you don't pass `PipelineData` objects to the `outputs` parameter of the `AutoMLStep`.
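A sketch of that retrieval follows, under the assumption that the SDK's `StepRun.get_outputs()` and `StepRunOutput.get_port_data_reference()` are available on the step run; rather than hard-coding the default output names, it lists them from the outputs dictionary:

```python
# Sketch: find the AutoMLStep's StepRun among the pipeline run's children,
# then enumerate its outputs. The default output names can be read from the
# returned dictionary instead of being hard-coded.
automl_step_run = next(r for r in run.get_children()
                       if r.name == 'AutoML_Classification')
outputs = automl_step_run.get_outputs()   # dict of name -> StepRunOutput
print(list(outputs.keys()))               # the default output names

# Download each output via its port data reference.
for name, output in outputs.items():
    output.get_port_data_reference().download('.')
```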
Finally, the actual metrics and model are downloaded to your local machine, as was discussed in the "Examine pipeline results" section above.