articles/stream-analytics/stream-analytics-job-diagram-with-metrics-new.md
10 additions & 10 deletions
@@ -1,5 +1,5 @@
 ---
-title: Data-driven debugging with the job diagram (preview) in Azure portal
+title: Debugging with the job diagram (preview) in Azure portal
 description: This article describes how to troubleshoot your Azure Stream Analytics job with job diagram and metrics in the Azure portal.
 author: xujiang1
 ms.author: xujiang1
@@ -8,20 +8,20 @@ ms.topic: how-to
 ms.date: 07/01/2022
 ---

-# Data-driven debugging with the job diagram (preview) in Azure portal
+# Debugging with the job diagram (preview) in Azure portal

-The job diagram (preview) on the blade in the Azure portal can help you visualize your job's query steps with its input source, output destination, and metrics. You can use the job diagram (preview) to examine the metrics data for each step and to more quickly isolate the source of a problem when you troubleshoot issues.
+The job diagram in the Azure portal can help you visualize your job's query steps with its input source, output destination, and metrics. You can use the job diagram to examine the metrics for each step and quickly identify the source of a problem when you troubleshoot issues.

-The job diagram (preview) is also available in VScode ASA extension. It provides the similar functions with more metrics data when you debug your job with local run service. To learn more details, see [Debug Azure Stream Analytics queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).
+The job diagram is also available in the Stream Analytics extension for VS Code. It provides similar functionality, with more metrics, when you debug a job that runs locally on your device. To learn more, see [Debug Azure Stream Analytics queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).

-## Using the job diagram (preview)
+## Using the job diagram

 In the Azure portal, while in a Stream Analytics job, under **Support + troubleshooting**, select **Job diagram (preview)**:

 :::image type="content" source="./media/stream-analytics-job-diagram-with-metrics-new/1-stream-analytics-job-diagram-with-metrics-portal.png" alt-text="Job diagram with metrics - location":::

-The job level default metrics data (Watermark delay, Input events, Output Events, and Backlogged Input Events) are shown in the chart section for the latest 30 minutes if you don't select any steps in diagram section. Of course, you can choose other metrics in the left side.
+The job-level default metrics, such as Watermark delay, Input events, Output events, and Backlogged input events, are shown in the chart section for the latest 30 minutes. You can visualize other metrics in a chart by selecting them in the left pane.
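As a side note on the default metrics charted above: watermark delay is, loosely, how far the job's output watermark trails the current wall-clock time. The following is a minimal, illustrative Python sketch of that idea; the `watermark_delay` helper and the timestamps are hypothetical and not part of the article or any Azure SDK.

```python
from datetime import datetime, timedelta, timezone

def watermark_delay(wall_clock: datetime, output_watermark: datetime) -> float:
    """Simplified view of the Watermark delay metric: the number of
    seconds the job's output watermark trails wall-clock time."""
    return (wall_clock - output_watermark).total_seconds()

# Example: a watermark 42 seconds behind "now" means the job's output
# lags the live stream by 42 seconds.
now = datetime(2022, 7, 1, 12, 0, 0, tzinfo=timezone.utc)
print(watermark_delay(now, now - timedelta(seconds=42)))  # 42.0
```

A steadily growing value of this quantity is the usual signal that a job is falling behind its input.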
@@ -43,11 +43,11 @@ It also provides the job operation actions in the menu section. You can use them

 ## Troubleshoot with metrics

-Job metrics data provides lots of insights to your job's health. You can check these metrics data through the job diagram (preview) in its chart section in job level or in the step level. To learn about Stream Analytics job metrics definition, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Job diagram integrates these metrics into the query steps (diagram). You can use these metrics within steps to monitor and analyze your job.
+A job's metrics provide many insights into your job's health. You can view these metrics through the job diagram's chart section at the job level or at the step level. To learn about the Stream Analytics job metrics definitions, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). The job diagram integrates these metrics into the query steps (diagram). You can use these metrics within steps to monitor and analyze your job.

 ### Is the job running well with its computation resource?

-* **SU % utilization** is the percentage of memory utilized by your job. If SU % utilization is consistently over 80%, it shows the job is approaching to the maximum allocated memory.
+* **SU (Memory) % utilization** is the percentage of memory utilized by your job. If SU (Memory) % utilization is consistently over 80%, the job is approaching its maximum allocated memory.
 * **CPU % utilization** is the percentage of CPU utilized by your job. There might be intermittent spikes in this metric, so we often check its average. High CPU utilization indicates a possible CPU bottleneck if the number of backlogged input events or the watermark delay increases at the same time.
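The two resource checks in the hunk above can be expressed as a simple heuristic. The sketch below is a hypothetical helper, not part of the article or any Azure SDK: the 80% memory threshold comes from the text, while the 90% average-CPU cutoff and the sample lists are illustrative assumptions.

```python
def diagnose(su_memory_pct, cpu_pct, backlog):
    """Return a list of warnings from recent metric samples.

    Each argument is a list of recent samples (e.g., one value per
    minute), newest last. Thresholds follow the article's guidance.
    """
    warnings = []
    # SU (Memory) % utilization consistently over 80% -> memory pressure.
    if su_memory_pct and min(su_memory_pct) > 80:
        warnings.append("SU (Memory) % utilization consistently over 80%: "
                        "job is approaching its maximum allocated memory")
    # High *average* CPU (spikes alone are normal) plus a rising backlog
    # suggests a CPU bottleneck, per the article's heuristic.
    avg_cpu = sum(cpu_pct) / len(cpu_pct) if cpu_pct else 0
    backlog_rising = len(backlog) >= 2 and backlog[-1] > backlog[0]
    if avg_cpu > 90 and backlog_rising:
        warnings.append("possible CPU bottleneck: high average CPU while "
                        "backlogged input events are increasing")
    return warnings

# Example: memory pressure and a CPU bottleneck at the same time.
print(diagnose([85, 88, 91], [95, 97, 93], [10, 40, 120]))
```

Using `min()` over the memory samples captures "consistently over 80%", and averaging the CPU samples avoids reacting to the intermittent spikes the article mentions.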
@@ -68,9 +68,9 @@ The input data related metrics can be viewed under **Input** category in the cha
 * **Out of order events** is the number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This metric can be affected by the **Out of order events** setting under the **Event ordering** section in the Azure portal.

-### Are we falling behind in data processing?
+### Is the job falling behind in processing input data streams?

-* **Backlogged input events** tells you how many more messages from the input need to be processed. When this number is greater than 0, it means your job can't process the data as fast as it's coming in. In this case you may need to increase the number of Streaming Units and/or make sure your job can be parallelized. You can see more info on this in the [query parallelization page](./stream-analytics-parallelization.md).
+* **Backlogged input events** tells you how many more messages from the input need to be processed. When this number is consistently greater than 0, your job can't process the data as fast as it's coming in. In this case, you might need to increase the number of Streaming Units and/or make sure your job can be parallelized. You can see more info on this in the [query parallelization page](./stream-analytics-parallelization.md).
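The "consistently greater than 0" condition in the new wording above can be made precise with a small sketch. The function name, the five-sample window, and the sample data are hypothetical, chosen only to illustrate why a single nonzero reading should not trigger scaling.

```python
def should_scale_up(backlog_samples, window=5):
    """Return True if Backlogged input events stayed above 0 for the
    last `window` samples, i.e. the job is consistently falling behind
    rather than absorbing a momentary burst."""
    recent = backlog_samples[-window:]
    return len(recent) == window and all(v > 0 for v in recent)

# A brief burst that clears (zeros present) should not trigger scaling;
# a sustained nonzero backlog should.
print(should_scale_up([1, 2, 0, 4, 5]))     # False
print(should_scale_up([0, 0, 3, 5, 8, 2, 1]))  # True
```

If this check fires, the article's remedies apply: add Streaming Units or restructure the query so it can be parallelized.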