articles/machine-learning/how-to-debug-parallel-run-step.md
## Debugging scripts from remote context

The transition from debugging a scoring script locally to debugging a scoring script in an actual pipeline can be a difficult leap. For information on finding your logs in the portal, see the [machine learning pipelines section on debugging scripts from a remote context](how-to-debug-pipelines.md#debugging-scripts-from-remote-context). The information in that section also applies to a ParallelRunStep.

For example, the log file `70_driver_log.txt` contains information from the controller that launches the ParallelRunStep code.

Because of the distributed nature of ParallelRunStep jobs, there are logs from several different sources. However, two consolidated files are created that provide high-level information:

- `~/logs/overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. At the end, it shows the result of the job. If the job failed, it shows the error message and where to start troubleshooting.

- `~/logs/sys/master.txt`: This file provides the master node (also known as the orchestrator) view of the running job. It includes task creation, progress monitoring, and the run result.

Logs generated from the entry script using the EntryScript helper and print statements are found in the following files:

- `~/logs/user/<node_name>.log.txt`: These files contain the logs written from entry_script using the EntryScript helper. They also contain print statements (stdout) from entry_script.

For a concise summary of the errors in your script, there is:

- `~/logs/user/error.txt`: This file tries to summarize the errors in your script.

For more information on errors in your script, there is:

- `~/logs/user/error/`: Contains all errors thrown and full stack traces, organized by node.

When you need a full understanding of how each node executed the scoring script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:

- `~/logs/sys/node/<node_name>.txt`: This file provides detailed information about each mini-batch as it is picked up or completed by a worker. For each mini-batch, this file includes:
  - The IP address and the PID of the worker process.
  - The total number of items, successfully processed items count, and failed item count.
  - The start time, duration, process time, and run method time.

You can also find information on the resource usage of the processes for each worker. This information is in CSV format and is located at `~/logs/sys/perf/overview.csv`. Information about each process is available under `~/logs/sys/processes.csv`.
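
If you'd rather pull these logs down than browse them in the portal, the step run's file APIs can fetch them. The following is a minimal sketch using azureml-core; the experiment name, run ID, and output folder are placeholders, and the exact file names recorded for your job may differ:

```python
import pandas as pd
from azureml.core import Experiment, Run, Workspace

ws = Workspace.from_config()
experiment = Experiment(ws, "batch-inference")                    # placeholder experiment name
step_run = Run(experiment, run_id="<parallel-run-step-run-id>")   # the ParallelRunStep run

# List what the run recorded, then download everything under logs/.
print(step_run.get_file_names())
step_run.download_files(prefix="logs", output_directory="prs_logs")

# Peek at the per-worker resource usage summary (path per the section above).
perf = pd.read_csv("prs_logs/logs/sys/perf/overview.csv")
print(perf.head())
```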

### How do I log from my user script from a remote context?

You can get a logger from EntryScript, as shown in the sample code below, to make the logs show up in the **logs/user** folder in the portal.
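
The sample code itself isn't reproduced in this diff view; a minimal sketch of the pattern might look like the following. The import path and the assumption that `EntryScript().logger` behaves like a standard Python logger are based on the description above and may differ across SDK versions:

```python
from entry_script import EntryScript  # import path is an assumption; check your SDK version


def init():
    """Runs once per worker process before any mini-batches are handled."""
    logger = EntryScript().logger
    logger.info("init() finished; this message lands under logs/user in the portal.")


def run(mini_batch):
    """Called once per mini-batch; log progress and return the processed results."""
    logger = EntryScript().logger
    logger.info("Processing a mini-batch of %d items.", len(mini_batch))
    return mini_batch
```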

### How could I pass a side input, such as a file or files containing a lookup table, to all my workers?

Construct a [Dataset](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) containing the side input and register it with your workspace. Pass it to the `side_input` parameter of your `ParallelRunStep`. Additionally, you can add its path in the `arguments` section to easily access its mounted path:
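
As a rough illustration of that flow (the datastore path, dataset name, and argument name below are made up, and the exact ParallelRunStep keyword for side inputs should be checked against your SDK version):

```python
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Register the lookup table as a file dataset (names are illustrative).
lookup_ds = Dataset.File.from_files(path=(datastore, "lookups/product_table.csv"))
lookup_ds = lookup_ds.register(workspace=ws, name="product-lookup", create_new_version=True)

# Mount it so every worker node sees the same local path.
lookup_side_input = lookup_ds.as_named_input("product_lookup").as_mount()

# Pass lookup_side_input to ParallelRunStep's side input parameter as described above,
# and add it to `arguments` (for example, ["--lookup_path", lookup_side_input]) so the
# entry script can read the mounted path, for instance in its init() method.
```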

## Next steps

* See the SDK reference for help with the [azureml-contrib-pipeline-step](https://docs.microsoft.com/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps?view=azure-ml-py) package and the [documentation](https://docs.microsoft.com/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunstep?view=azure-ml-py) for the ParallelRunStep class.
* Follow the [advanced tutorial](tutorial-pipeline-batch-scoring-classification.md) on using pipelines with ParallelRunStep, which includes an example of passing another file as a side input.