# Use profiling to debug pipeline performance issues
The profiling feature in Azure Machine Learning studio can help you debug pipeline performance issues such as hanging or long durations. Profiling lists the duration of each pipeline step and provides a Gantt chart for visualization. You can see the time spent on each job status and quickly find steps that take longer than expected.
## Find the node that runs the longest overall
1. On the **Jobs** page of Azure Machine Learning studio, select the job name to open the job detail page.
1. In the action bar, select **View profiling**. Profiling works only for root pipelines. The profiler page can take a few minutes to load.
:::image type="content" source="./media/how-to-debug-pipeline-performance/view-profiling-detail.png" alt-text="Screenshot showing the pipeline at root level with the View profiling button highlighted." lightbox="./media/how-to-debug-pipeline-performance/view-profiling.png":::
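
If you prefer to open the job detail page from code, the following minimal sketch uses the Azure Machine Learning Python SDK v2 (`azure-ai-ml`) to retrieve the root pipeline job and print a link to its detail page in studio, where the **View profiling** button is available. The workspace values and job name are placeholders.

```python
# Minimal sketch using the Python SDK v2 (azure-ai-ml).
# The subscription, resource group, workspace, and job name are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Retrieve the root pipeline job and print the link to its detail page in studio.
job = ml_client.jobs.get("<pipeline-job-name>")
print(job.studio_url)
```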
To identify the step that takes the longest, view the Gantt chart at the top of the profiler page. The length of each bar shows how long the step takes; the step with the longest bar took the most time.
:::image type="content" source="./media/how-to-debug-pipeline-performance/critical-path.png" alt-text="Screenshot showing the Gantt chart and the critical path." lightbox="./media/how-to-debug-pipeline-performance/critical-path.png":::
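
To cross-check what the Gantt chart shows, you can also enumerate the steps of the root pipeline from code. This is a minimal sketch that assumes the `azure-ai-ml` SDK v2; the workspace values and pipeline job name are placeholders, and it only prints each child job's display name and current status.

```python
# Minimal sketch using the Python SDK v2 (azure-ai-ml).
# The workspace values and the pipeline job name are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# List the child jobs (steps) of the root pipeline and print their status,
# so you can match them against the bars in the Gantt chart.
for child in ml_client.jobs.list(parent_job_name="<pipeline-job-name>"):
    print(child.display_name, child.status)
```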
The following table presents the definition of each job status, the estimated time for each status, and suggested actions if a status takes longer than expected.

| Job status | Definition | Estimated time | Suggested actions |
| --- | --- | --- | --- |
| Not started | The job is submitted from the client and accepted in Azure Machine Learning services. Most time is spent in service scheduling and preprocessing. | If there's no backend service issue, this time should be short. | Open a support case via the Azure portal. |
| Preparing | The job is waiting for its dependencies to be prepared, for example, environment image building. | If you're using a curated or registered custom environment, this time should be short. | Check the image building log. |
| Inqueue | The job is waiting for compute resources to be allocated. The duration of this stage mainly depends on the status of your compute cluster. | If you're using a cluster with enough compute resources, this time should be short. | Increase the maximum number of nodes on the target compute, move the job to a less busy compute, or modify the job priority to get more compute resources for the job. |
| Running | The job is executing on the remote compute. This stage consists of: <br> 1. Runtime preparation, such as image pulling, Docker starting, and data mounting or downloading. <br> 2. User script execution. | This status is expected to be the most time consuming. | 1. Check the source code for any user error. <br> 2. View the monitoring tab for compute metrics like CPU, memory, and networking to identify any bottlenecks. <br> 3. If the job is running, try online debugging with [interactive endpoints](how-to-interactive-jobs.md), or debug your code locally. |
| Finalizing | The job is in post-processing after execution completes. Time spent in this stage is mainly for post processes like uploading output, uploading metrics and logs, and cleaning up resources. | Time is expected to be short for command jobs. Duration might be long for parallel run step (PRS) or Message Passing Interface (MPI) jobs because, for distributed jobs, finalizing lasts from when the first node starts until the last node finishes. | Change your step job output mode from upload to mount if you find an unexpectedly long finalizing time (a sketch follows this table), or open a support case via the Azure portal. |
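
For the finalizing suggestion in the last row, the following sketch shows one way to set a step's output mode to mount instead of upload with the `azure-ai-ml` SDK v2. The source folder, script, environment, and compute names are placeholders, not part of this article's pipeline.

```python
# Minimal sketch using the Python SDK v2 (azure-ai-ml).
# The source folder, script, environment, and compute names are placeholders.
from azure.ai.ml import Output, command

step = command(
    code="./src",
    command="python train.py --output_dir ${{outputs.model_dir}}",
    environment="azureml:my-environment:1",
    compute="cpu-cluster",
    outputs={
        # "rw_mount" writes output to storage while the step runs, so there's
        # less data left to move during Finalizing than with "upload" mode.
        "model_dir": Output(type="uri_folder", mode="rw_mount"),
    },
)
```

With mount mode, output is written to the datastore as the script produces it, which can shorten the Finalizing stage.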