The default value for all metric thresholds is `70`. If you want to set the metric threshold to another number, open the *\*.AutoscaleSettingTemplate.parameters.json* file and change the `Threshold` value.
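For orientation, here is a minimal sketch of how that entry could look, assuming the file is a standard ARM deployment parameters file; the schema URI, the example value of 80, and any surrounding parameters are assumptions rather than part of this change.

```json
{
  // Sketch only (assumed layout): the default Threshold is 70; 80 is just an example replacement value.
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "Threshold": {
      "value": 80
    }
  }
}
```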
**articles/stream-analytics/data-errors.md** (1 addition, 1 deletion)
@@ -16,7 +16,7 @@ This article outlines the different error types, causes, and resource log detail
## Resource Logs schema
- See [Troubleshoot Azure Stream Analytics by using diagnostics logs](stream-analytics-job-diagnostic-logs.md#resource-logs-schema) to see the schema for resource logs. The following JSON is an example value for the **Properties** field of a resource log for a data error.
+ See [Troubleshoot Azure Stream Analytics by using diagnostics logs](monitor-azure-stream-analytics-reference.md#resource-logs-schema) to see the schema for resource logs. The following JSON is an example value for the **Properties** field of a resource log for a data error.
- 3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+ 3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics).

- 5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Live output sinks aren't supported.
+ 5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics). Live output sinks aren't supported.
**articles/stream-analytics/includes/metrics-dimensions.md** (0 additions, 2 deletions)
@@ -12,8 +12,6 @@ ms.custom: "include file"
- Stream Analytics has [many metrics](../stream-analytics-job-metrics.md) available to monitor a job's health. To troubleshoot performance problems with your job, you can split and filter metrics by using the following dimensions.
**articles/stream-analytics/job-diagram-with-metrics.md** (4 additions, 4 deletions)
@@ -15,7 +15,7 @@ The job diagram in the Azure portal can help you visualize your job's query step
There are two types of job diagrams:
- * **Physical diagram**: it visualizes the key metrics of Stream Analytics job with the physical computation concept: streaming node dimension. A streaming node represents a set of compute resources that's used to process job's input data. To learn more details about the streaming node dimension, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+ * **Physical diagram**: it visualizes the key metrics of Stream Analytics job with the physical computation concept: streaming node dimension. A streaming node represents a set of compute resources that's used to process job's input data. To learn more details about the streaming node dimension, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
Inside each streaming node, there are Stream Analytics processors available for processing the stream data. Each processor represents one or more steps in your query. You can visualize the processor topology in each streaming node by using the **processor diagram** in physical job diagram.
@@ -59,7 +59,7 @@ The following screenshot shows a physical job diagram with a default time period
- For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+ For more information about the metrics definition, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
1. **Chart section**: it's the place where you can view the historical metrics data within the selected time range. The default metrics shown in the default chart are **SU (Memory) % Utilization** and **CPU % Utilization**. You can also add more charts by clicking **Add chart**.
The **Diagram/Table section** and **Chart section** can be interactive with each other. You can select multiple nodes in **Diagram/Table section** to get the metrics in **Chart section** filtered by the selected nodes and vice versa.
@@ -109,7 +109,7 @@ The logical job diagram has a similar layout to the physical diagram, with three
1. **Command bar section**: in logical diagram, you can operate the cloud job (Stop, Delete), and configure the time range of the job metrics. The diagram view is only available for logical diagrams.
- 2. **Diagram section**: the node box in this selection represents the job's input, output, and query steps. You can view the metrics in the node directly or in the chart section interactively by clicking certain node in this section. For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+ 2. **Diagram section**: the node box in this selection represents the job's input, output, and query steps. You can view the metrics in the node directly or in the chart section interactively by clicking certain node in this section. For more information about the metrics definition, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
3. **Chart section**: the chart section in a logical diagram has two tabs: **Metrics** and **Activity Logs**.
    * **Metrics**: job's metrics data is shown here when the corresponding metrics are selected in the right panel.
    * **Activity Logs**: job's operations performed on jobs is shown here. When the job's diagnostic log is enabled, it's also shown here. To learn more about the job logs, see [Azure Stream Analytics job logs](./stream-analytics-job-diagnostic-logs.md).
@@ -126,7 +126,7 @@ To learn more about how to debug with logical diagrams, see [Debugging with the
## Next steps
* [Introduction to Stream Analytics](stream-analytics-introduction.md)
* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md)
**articles/stream-analytics/job-states.md** (4 additions, 4 deletions)
@@ -15,12 +15,12 @@ A Stream Analytics job could be in one of four states at any given time: running
| State | Description | Recommended actions |
| --- | --- | --- |
- |**Running**| Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It's a best practice to track your job’s performance by monitoring [key metrics](./stream-analytics-job-metrics.md#scenarios-to-monitor). |
+ |**Running**| Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It's a best practice to track your job’s performance by monitoring [key metrics](monitor-azure-stream-analytics.md#azure-stream-analytics-metrics). |
|**Stopped**| Your job is stopped and doesn't process events. | NA |
- | **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors that might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors etc. Your job’s performance may be impacted when job is in degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it's recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you can't take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use [watermark delay](./stream-analytics-job-metrics.md#scenarios-to-monitor) metric to understand if these transient errors are impacting your job's performance.|
+ | **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors that might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors etc. Your job’s performance may be impacted when job is in degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it's recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you can't take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use [watermark delay](monitor-azure-stream-analytics.md#azure-stream-analytics-metrics) metric to understand if these transient errors are impacting your job's performance.|
|**Failed**| Your job encountered a critical error resulting in a failed state. Events aren't read and processed. Runtime errors are a common cause for jobs ending up in a failed state. | You can [configure alerts](./stream-analytics-set-up-alerts.md#set-up-alerts-in-the-azure-portal) so that you get notified when job goes to Failed state. <br> <br>You can debug using [activity and resource logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to identify root cause and address the issue.|
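The **Degraded** and **Failed** rows above lean on the watermark delay metric and on alert configuration. As a hedged illustration of that guidance, the sketch below shows one way a metric alert on watermark delay might be declared in an ARM template; the metric ID `OutputWatermarkDelaySeconds`, the 60-second threshold, the window and frequency, and the placeholder names are assumptions to verify against the linked metrics reference before use.

```json
{
  // Sketch only: resource names, threshold, and metric ID are assumptions, not prescribed by the article.
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "watermark-delay-alert",
  "location": "global",
  "properties": {
    "description": "Fires when the maximum watermark delay in a 5-minute window exceeds 60 seconds.",
    "severity": 2,
    "enabled": true,
    "scopes": [
      "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>"
    ],
    "evaluationFrequency": "PT1M",
    "windowSize": "PT5M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "HighWatermarkDelay",
          "metricName": "OutputWatermarkDelaySeconds",
          "operator": "GreaterThan",
          "threshold": 60,
          "timeAggregation": "Maximum"
        }
      ]
    }
  }
}
```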