The transition from debugging a scoring script locally to debugging a scoring script in an actual pipeline can be a difficult leap. For information on finding your logs in the portal, see [machine learning pipelines section on debugging scripts from a remote context](how-to-debug-pipelines.md). The information in that section also applies to a ParallelRunStep.
For example, the log file `70_driver_log.txt` contains information from the controller that launches the ParallelRunStep code.
Because of the distributed nature of ParallelRunStep jobs, there are logs from several different sources. However, the following consolidated files provide high-level information:
- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. At the end, it shows the result of the job.
- `~/logs/job_result.txt`: This file shows the result of the job. If the job failed, it shows the error message and where to start troubleshooting.
- `~/logs/job_error.txt`: This file summarizes the errors in your script.
- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job, including task creation, progress monitoring, and the run result.
- `~/logs/sys/job_report/processed_mini-batches.csv`: A table of all mini-batches that have been processed. It shows the result of each mini-batch run, the ID of the node that executed it, and the name of the process, along with the elapsed time and any error messages. You can find the logs for each mini-batch run by following its node ID and process name.
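To illustrate how the job report can be used, here is a sketch that scans `processed_mini-batches.csv` for failed mini-batches and builds the path of the matching entry-script log. The column names (`MiniBatchId`, `Status`, `NodeId`, `ProcessName`) are assumptions made for this sketch; check the header row of your own CSV before relying on them.

```python
import csv
from pathlib import Path


def failed_minibatch_logs(report_csv: str, logs_root: str = "~/logs") -> list:
    """Return (mini-batch ID, entry-script log path) pairs for failed mini-batches.

    The column names used below are assumptions for this sketch; check the
    header row of your own processed_mini-batches.csv before relying on them.
    """
    hits = []
    with open(report_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Status", "").lower() != "succeeded":
                node = row.get("NodeId", "?")
                proc = row.get("ProcessName", "?")
                # Entry-script log path pattern described in this article:
                # ~/logs/user/entry_script_log/<node_id>/<process_name>.log.txt
                log_path = (Path(logs_root).expanduser() / "user"
                            / "entry_script_log" / node / f"{proc}.log.txt")
                hits.append((row.get("MiniBatchId", "?"), str(log_path)))
    return hits
```

Run it against a locally downloaded copy of the job's `~/logs` folder to get a short list of the log files worth opening first.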
Logs generated from your entry script by using the EntryScript helper and print statements are found in the following files:
- `~/logs/user/entry_script_log/<node_id>/<process_name>.log.txt`: These files are the logs written from entry_script by using the EntryScript helper.
- `~/logs/user/stderr/<node_id>/<process_name>.stderr.txt`: These files are the logs from stderr of entry_script.
For example, as the screenshot shows, mini-batch 0 failed on node 1 in process000. The corresponding logs for your entry script can be found in `~/logs/user/entry_script_log/1/process000.log.txt`, `~/logs/user/stdout/1/process000.log.txt`, and `~/logs/user/stderr/1/process000.stderr.txt`.
For more information on errors in your script, there is:

- `~/logs/user/error/`: Contains the full stack traces of exceptions thrown while loading and running the entry script.
When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `~/logs/sys/node` folder, grouped by worker nodes:
- `~/logs/sys/node/<node_id>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker.
You can also view the results of periodic checks of the resource usage for each node:
- `node_resource_usage.csv`: Resource usage overview of the node.
- `processes_resource_usage.csv`: Resource usage overview of each process.
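One way to skim these resource-usage files is sketched below: it loads a CSV with only the standard library and reports the peak of each numeric column. Because the exact column layout isn't documented here, the code discovers numeric columns from the data instead of assuming any names.

```python
import csv


def peak_usage(csv_path: str) -> dict:
    """Return the maximum observed value for every numeric column in the CSV."""
    peaks = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for col, raw in row.items():
                try:
                    value = float(raw)
                except (TypeError, ValueError):
                    continue  # skip timestamps, node names, and other text columns
                peaks[col] = max(value, peaks.get(col, value))
    return peaks
```

Running this over `node_resource_usage.csv` for each node is a quick way to spot whether a node was CPU- or memory-bound before digging into individual process logs.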
## My job failed with SystemExit: 42. What does it mean?
This exit code is set by ParallelRunStep (PRS) by design. You can find the failure reason in `~/logs/job_result.txt` and follow the previous section to debug your job.
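As a minimal illustration of this lookup, the sketch below maps an observed exit code to the hint recorded in `job_result.txt`. The `logs_dir` argument stands for a locally downloaded copy of the job's `~/logs` folder, and the constant name is invented for this sketch.

```python
from pathlib import Path

PRS_DESIGNED_EXIT_CODE = 42  # name invented for this sketch


def explain_exit_code(logs_dir: str, exit_code: int) -> str:
    """Return the PRS failure reason recorded in job_result.txt, if any."""
    if exit_code != PRS_DESIGNED_EXIT_CODE:
        return f"Exit code {exit_code} is not the PRS-designed failure code."
    result_file = Path(logs_dir) / "job_result.txt"
    if result_file.is_file():
        return result_file.read_text()
    return "job_result.txt not found; inspect the other files under logs/."
```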
## How do I log from my user script from a remote context?
ParallelRunStep may run multiple processes on one node, based on `process_count_per_node`. To organize the logs from each process on a node and to combine print and log statements, we recommend using the ParallelRunStep logger. You get a logger from EntryScript, which makes the logs show up in the **logs/user** folder in the portal.
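A minimal sketch of this pattern, assuming the PRS runtime exposes the helper as `azureml_user.parallel_run.EntryScript` (verify the import path against your environment); it falls back to plain `logging` so the same entry script can also be exercised locally:

```python
import logging

try:
    # Inside a ParallelRunStep job, the runtime provides the EntryScript
    # helper, whose logger routes records to logs/user in the portal.
    # (Assumed import path - verify it against your environment.)
    from azureml_user.parallel_run import EntryScript

    def get_logger() -> logging.Logger:
        return EntryScript().logger
except ImportError:
    def get_logger() -> logging.Logger:
        # Local fallback: plain logging, so the entry script runs anywhere.
        logger = logging.getLogger("parallel_run_local")
        if not logger.handlers:
            handler = logging.StreamHandler()
            handler.setFormatter(
                logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
            logger.addHandler(handler)
            logger.setLevel(logging.INFO)
        return logger


def init():
    get_logger().info("init() called")


def run(mini_batch):
    logger = get_logger()
    logger.info("processing mini-batch of size %d", len(mini_batch))
    return mini_batch  # ParallelRunStep expects a result per input item
```

In the fallback branch the records go to the console; inside a real job, the EntryScript logger routes them to **logs/user** instead.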