
Commit 9b8fa9c

Merge pull request #101304 from Blackmist/debug-parallelrunstep
Debug parallelrunstep
2 parents: 000b0a2 + 3399697

16 files changed: +141 -223 lines

.openpublishing.redirection.json

Lines changed: 10 additions & 0 deletions
```diff
@@ -165,6 +165,16 @@
     "redirect_url": "/azure/machine-learning/service/how-to-deploy-fpga-web-service",
     "redirect_document_id": false
   },
+  {
+    "source_path": "articles/machine-learning/how-to-debug-batch-predictions.md",
+    "redirect_url": "/azure/machine-learning/how-to-debug-parallel-run-step",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/machine-learning/how-to-run-batch-predictions.md",
+    "redirect_url": "/azure/machine-learning/how-to-use-parallel-run-step",
+    "redirect_document_id": false
+  },
   {
     "source_path": "articles/machine-learning/service/quickstart-run-local-notebook.md",
     "redirect_url": "/azure/machine-learning/service/how-to-configure-environment#local",
```

articles/machine-learning/azure-machine-learning-release-notes.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1486,7 +1486,7 @@ Azure Machine Learning Compute can be created in Python, using Azure portal, or
 + ML Pipelines
   + New and updated notebooks for getting started with pipelines, batch scoping, and style transfer examples: https://aka.ms/aml-pipeline-notebooks
   + Learn how to [create your first pipeline](how-to-create-your-first-pipeline.md)
-  + Learn how to [run batch predictions using pipelines](how-to-run-batch-predictions.md)
+  + Learn how to [run batch predictions using pipelines](how-to-use-parallel-run-step.md)
 + Azure Machine Learning compute target
   + [Sample notebooks](https://aka.ms/aml-notebooks) are now updated to use the new managed compute.
   + [Learn about this compute](how-to-set-up-training-targets.md#amlcompute)
```

articles/machine-learning/concept-enterprise-security.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -330,7 +330,7 @@ Here are the details:

 * [Secure Azure Machine Learning web services with SSL](how-to-secure-web-service.md)
 * [Consume a Machine Learning model deployed as a web service](how-to-consume-web-service.md)
-* [How to run batch predictions](how-to-run-batch-predictions.md)
+* [How to run batch predictions](how-to-use-parallel-run-step.md)
 * [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
 * [Collect data for models in production](how-to-enable-data-collection.md)
 * [Azure Machine Learning SDK](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py)
```

articles/machine-learning/concept-model-management-and-deployment.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -85,7 +85,7 @@ You also provide the configuration of the target deployment platform. For exampl
 When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service and interact with IoT Edge.

 #### Batch scoring
-Batch scoring is supported through ML pipelines. For more information, see [Batch predictions on big data](how-to-run-batch-predictions.md).
+Batch scoring is supported through ML pipelines. For more information, see [Batch predictions on big data](how-to-use-parallel-run-step.md).

 #### Real-time web services
```

articles/machine-learning/how-to-access-data.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -259,7 +259,7 @@ Azure Machine Learning provides several ways to use your models for scoring. Som

 | Method | Datastore access | Description |
 | ----- | :-----: | ----- |
-| [Batch prediction](how-to-run-batch-predictions.md) || Make predictions on large quantities of data asynchronously. |
+| [Batch prediction](how-to-use-parallel-run-step.md) || Make predictions on large quantities of data asynchronously. |
 | [Web service](how-to-deploy-and-where.md) |   | Deploy models as a web service. |
 | [Azure IoT Edge module](how-to-deploy-and-where.md) |   | Deploy models to IoT Edge devices. |
```

articles/machine-learning/how-to-debug-batch-predictions.md

Lines changed: 0 additions & 187 deletions
This file was deleted.

articles/machine-learning/how-to-debug-parallel-run-step.md

Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
---
title: Debug and troubleshoot ParallelRunStep
titleSuffix: Azure Machine Learning
description: Debug and troubleshoot ParallelRunStep in machine learning pipelines in the Azure Machine Learning SDK for Python. Learn common pitfalls for developing with pipelines, and tips to help you debug your scripts before and during remote execution.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.reviewer: trbye, jmartens, larryfr, vaidyas
ms.author: trmccorm
author: tmccrmck
ms.date: 01/15/2020
---

# Debug and troubleshoot ParallelRunStep
[!INCLUDE [applies-to-skus](../../includes/aml-applies-to-basic-enterprise-sku.md)]

In this article, you learn how to debug and troubleshoot the [ParallelRunStep](https://docs.microsoft.com/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) class from the [Azure Machine Learning SDK](https://docs.microsoft.com/python/api/overview/azure/ml/intro?view=azure-ml-py).

## Testing scripts locally

See the [Testing scripts locally section](how-to-debug-pipelines.md#testing-scripts-locally) for machine learning pipelines. ParallelRunStep runs as a step in an ML pipeline, so the same guidance applies.
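
As a quick local check, you can also exercise your entry script's `init()`/`run()` contract directly on a small sample mini-batch before submitting a pipeline. The sketch below defines the logic inline for illustration; in practice you would import your own entry script module (the doubling "model" and the plain-list mini-batch are assumptions, not part of the ParallelRunStep API):

```python
# Minimal local harness for a ParallelRunStep-style entry script.
# The entry script contract: init() runs once per process, run(mini_batch)
# processes one mini-batch and returns one result per input item.

model = None

def init():
    """Simulate one-time setup (normally: load the model here)."""
    global model
    model = lambda x: x * 2  # stand-in for a real model

def run(mini_batch):
    """Process one mini-batch and return one result per input item."""
    return [model(item) for item in mini_batch]

if __name__ == "__main__":
    init()
    results = run([1, 2, 3])
    # run() should return one result per input item.
    assert len(results) == 3
    print(results)
```

Catching shape mismatches or unhandled exceptions here is much faster than waiting for a remote run to fail.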
23+
24+
## Debugging scripts from remote context
25+
26+
The transition from debugging a scoring script locally to debugging a scoring script in an actual pipeline can be a difficult leap. For information on finding your logs in the portal, the [machine learning pipelines section on debugging scripts from a remote context](how-to-debug-pipelines.md#debugging-scripts-from-remote-context). The information in that section also applies to a parallel step run.
27+
28+
For example, the log file `70_driver_log.txt` contains information from the controller that launches parallel run step code.
29+
30+
Because of the distributed nature of parallel run jobs, there are logs from several different sources. However, two consolidated files are created that provide high-level information:
31+
32+
- `~/logs/overview.txt`: This file provides a high-level info about the number of mini-batches (also known as tasks) created so far and number of mini-batches processed so far. At this end, it shows the result of the job. If the job failed, it will show the error message and where to start the troubleshooting.
33+
34+
- `~/logs/sys/master.txt`: This file provides the master node (also known as the orchestrator) view of the running job. Includes task creation, progress monitoring, the run result.
35+
36+
Logs generated from entry script using EntryScript.logger and print statements will be found in following files:
37+
38+
- `~/logs/user/<ip_address>/Process-*.txt`: This file contains logs written from entry_script using EntryScript.logger. It also contains print statement (stdout) from entry_script.
39+
40+
When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `sys/worker` folder, grouped by worker nodes:
41+
42+
- `~/logs/sys/worker/<ip_address>/Process-*.txt`: This file provides detailed info about each mini-batch as it is picked up or completed by a worker. For each mini-batch, this file includes:
43+
44+
- The IP address and the PID of the worker process.
45+
- The total number of items, successfully processed items count, and failed item count.
46+
- The start time, duration, process time and run method time.
47+
48+
You can also find information on the resource usage of the processes for each worker. This information is in CSV format and is located at `~/logs/sys/perf/<ip_address>/`. For a single node, job files will be available under `~logs/sys/perf`. For example, when checking for resource utilization, look at the following files:
49+
50+
- `Process-*.csv`: Per worker process resource usage.
51+
- `sys.csv`: Per node log.
52+
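
To eyeball resource usage without opening each file in a spreadsheet, you can summarize a column across these CSVs with a short script. A minimal sketch using only the Python standard library; the column name `cpu_percent` is an assumption for illustration, so check the header row of your own `Process-*.csv` files first:

```python
import csv
import glob
import os

def peak_column(folder, column):
    """Return {filename: max value of `column`} for each Process-*.csv in folder."""
    peaks = {}
    for path in glob.glob(os.path.join(folder, "Process-*.csv")):
        with open(path, newline="") as f:
            # Skip rows where the column is missing or empty.
            values = [float(row[column]) for row in csv.DictReader(f) if row.get(column)]
        if values:
            peaks[os.path.basename(path)] = max(values)
    return peaks

# Example usage (folder layout as described above; column name is hypothetical):
# print(peak_column(os.path.expanduser("~/logs/sys/perf/10.0.0.4"), "cpu_percent"))
```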

### How do I log from my user script from a remote context?

You can get a logger from EntryScript, as shown in the following sample code, to make the logs show up in the **logs/user** folder in the portal.

**A sample entry script using the logger:**
```python
from entry_script import EntryScript


def init():
    """Initialize the node."""
    entry_script = EntryScript()
    logger = entry_script.logger
    logger.debug("This will show up in files under logs/user on the Azure portal.")


def run(mini_batch):
    """Accept the mini-batch and return it unchanged."""
    # EntryScript is a singleton, so this returns the same instance as in init().
    entry_script = EntryScript()
    logger = entry_script.logger
    logger.debug(f"{__file__}: {mini_batch}.")
    ...

    return mini_batch
```

### How do I pass a side input, such as a file or files containing a lookup table, to all my workers?

Construct a [Dataset](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) object containing the side input and register it with your workspace. You can then access it in your inference script (for example, in your init() method) as follows:

```python
from azureml.core.run import Run
from azureml.core.dataset import Dataset

ws = Run.get_context().experiment.workspace
lookup_ds = Dataset.get_by_name(ws, "<registered-name>")
lookup_ds.download(target_path='.', overwrite=True)
```
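
After the download, each worker process can parse the side input once in `init()` and reuse it across mini-batches. A minimal sketch with the standard library; the file name `lookup.csv` and its two-column layout are assumptions for illustration, not produced by the snippet above:

```python
import csv

lookup = {}

def load_lookup(path):
    """Load a two-column CSV into a dict mapping first column -> second column."""
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f)}

def init():
    """Read the downloaded side input once per worker process."""
    global lookup
    lookup = load_lookup("lookup.csv")  # hypothetical file from the downloaded Dataset
```

Loading in `init()` keeps the per-mini-batch `run()` calls cheap, since the table is parsed only once per process.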

## Next steps

* See the SDK reference for help with the [azureml-contrib-pipeline-steps](https://docs.microsoft.com/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps?view=azure-ml-py) package and the [documentation](https://docs.microsoft.com/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunstep?view=azure-ml-py) for the ParallelRunStep class.

* Follow the [advanced tutorial](tutorial-pipeline-batch-scoring-classification.md) on using pipelines with the parallel run step.

articles/machine-learning/how-to-deploy-and-where.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -181,7 +181,7 @@ To deploy the model, you need the following items:
 >
 > * The Azure Machine Learning SDK doesn't provide a way for web services or IoT Edge deployments to access your data store or datasets. If your deployed model needs to access data stored outside the deployment, like data in an Azure storage account, you must develop a custom code solution by using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
 >
-> An alternative that might work for your scenario is [batch prediction](how-to-run-batch-predictions.md), which does provide access to data stores during scoring.
+> An alternative that might work for your scenario is [batch prediction](how-to-use-parallel-run-step.md), which does provide access to data stores during scoring.

 * **Dependencies**, like helper scripts or Python/Conda packages required to run the entry script or model.
```

articles/machine-learning/how-to-deploy-app-service.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -62,7 +62,7 @@ Before deploying, you must define what is needed to run the model as a web servi
 > [!IMPORTANT]
 > The Azure Machine Learning SDK does not provide a way for the web service to access your datastore or data sets. If you need the deployed model to access data stored outside the deployment, such as in an Azure Storage account, you must develop a custom code solution using the relevant SDK. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python).
 >
-> Another alternative that may work for your scenario is [batch predictions](how-to-run-batch-predictions.md), which does provide access to datastores when scoring.
+> Another alternative that may work for your scenario is [batch predictions](how-to-use-parallel-run-step.md), which does provide access to datastores when scoring.

 For more information on entry scripts, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
```
