
Commit 252a1f5

edit pass: stream-analytics-troubleshoot-output
1 parent f113d60

File tree

1 file changed: +25 -27 lines changed

articles/stream-analytics/stream-analytics-troubleshoot-output.md

Lines changed: 25 additions & 27 deletions

# Troubleshoot Azure Stream Analytics outputs

This article describes common issues with Azure Stream Analytics output connections and how to troubleshoot them. Many troubleshooting steps require that diagnostic logs be enabled for your Stream Analytics job. If you don't have diagnostic logs enabled, see [Troubleshoot Stream Analytics by using diagnostics logs](stream-analytics-job-diagnostic-logs.md).

## The job doesn't produce output

1. Verify connectivity to outputs by using the **Test Connection** button for each output.
1. Look at [Monitoring metrics](stream-analytics-monitoring.md) on the **Monitor** tab. Because the values are aggregated, the metrics are delayed by a few minutes.

* If the **Input Events** value is greater than zero, the job can read the input data. If not, there's an issue with the job's input. For more information, see [Troubleshoot input connections](stream-analytics-troubleshoot-input.md).

One mitigation for this kind of first output delay is to use query parallelization techniques, such as partitioning the data. Or, you can add more Streaming Units to improve the throughput until the job catches up. For more information, see [Considerations when creating Stream Analytics jobs](stream-analytics-concepts-checkpoint-replay.md).
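As a minimal sketch of the partitioning approach (the input, output, and the PartitionId and deviceId column names are hypothetical), a query like the following lets each input partition be processed in parallel:

```sql
-- Hypothetical input, output, and column names.
-- PARTITION BY lets each input partition be processed independently,
-- so adding Streaming Units scales the job across partitions.
SELECT PartitionId, deviceId, COUNT(*) AS eventCount
INTO [output]
FROM [input] PARTITION BY PartitionId
GROUP BY PartitionId, deviceId, TumblingWindow(minute, 5)
```

Including PartitionId in the GROUP BY clause keeps the query fully parallel, so each partition's backlog can be drained independently.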

These factors affect the timeliness of the first output:

* The use of windowed aggregates, such as a GROUP BY clause with tumbling, hopping, or sliding windows (a sketch follows this list):

* For tumbling or hopping window aggregates, the results are generated at the end of the window time frame.
* For a sliding window, the results are generated when an event enters or exits the sliding window.
* If you're planning to use a large window size, such as more than one hour, it's best to choose a hopping or sliding window. These window types let you see the output more frequently.

* The use of temporal joins, such as JOIN with DATEDIFF:
* Matches are generated as soon as both sides of the match arrive.
* Data that lacks a match, as in a LEFT OUTER JOIN, is generated at the end of the DATEDIFF window for each event on the left side.

* The use of temporal analytic functions, such as ISFIRST, LAST, and LAG with LIMIT DURATION:
* For analytic functions, the output is generated for every event. There is no delay.
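To make the windowed-aggregate timing concrete, here's a minimal sketch (the input, output, and column names are hypothetical) that contrasts when a tumbling window and a hopping window first emit results:

```sql
-- Tumbling window: one result per device per hour, so the first
-- output appears only after the first full hour has elapsed.
SELECT deviceId, COUNT(*) AS eventCount
INTO [output]
FROM [input] TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(hour, 1)

-- Alternative: a hopping window over the same hour that emits
-- every 5 minutes, so results start appearing much sooner.
SELECT deviceId, COUNT(*) AS eventCount
INTO [output]
FROM [input] TIMESTAMP BY eventTime
GROUP BY deviceId, HoppingWindow(minute, 60, 5)
```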

## The output falls behind

During the normal operation of a job, the output might show longer and longer periods of latency. If the output falls behind like this, you can pinpoint the root causes by examining the following factors:

* Whether the downstream sink is throttled
* Whether the upstream source is throttled
* Whether the processing logic in the query is compute-intensive

To see the output details, select the streaming job in the Azure portal, and then select **Job diagram**. For each input, there's a backlog event metric per partition. If the metric keeps increasing, it's an indicator that system resources are constrained, potentially because of output sink throttling or high CPU usage. For more information, see [Data-driven debugging by using the job diagram](stream-analytics-job-diagram-with-metrics.md).

## Key violation warning with Azure SQL Database output

When you configure an Azure SQL database as the output for a Stream Analytics job, the job bulk inserts records into the destination table. In general, Azure Stream Analytics guarantees [at-least-once delivery](https://docs.microsoft.com/stream-analytics-query/event-delivery-guarantees-azure-stream-analytics) to the output sink. You can still [achieve exactly-once delivery](https://blogs.msdn.microsoft.com/streamanalytics/2017/01/13/how-to-achieve-exactly-once-delivery-for-sql-output/) to a SQL output when the SQL table has a unique constraint defined.

When you set up unique key constraints on the SQL table, Azure Stream Analytics removes duplicate records. It splits the data into batches and recursively inserts the batches until a single duplicate record is found. The split and insert process ignores the duplicates one at a time. For a streaming job that has many duplicate rows, the process is inefficient and time-consuming. If you see multiple key violation warning messages in your Activity log for the previous hour, it's likely that your SQL output is slowing down the entire job.

To resolve this issue, [configure the index](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql) that's causing the key violation by enabling the IGNORE_DUP_KEY option. This option allows SQL to ignore duplicate values during bulk inserts: Azure SQL Database produces a warning message instead of an error, and Azure Stream Analytics no longer produces primary key violation errors.

Note the following observations when configuring IGNORE_DUP_KEY for several types of indexes (a T-SQL sketch follows this list):

* You can't set IGNORE_DUP_KEY on a primary key or a unique constraint by using ALTER INDEX. You need to drop the index and recreate it.
* You can set IGNORE_DUP_KEY by using ALTER INDEX for a unique index. This instance is different from a PRIMARY KEY/UNIQUE constraint and is created by using a CREATE INDEX or INDEX definition.
* The IGNORE_DUP_KEY option doesn't apply to column store indexes because you can't enforce uniqueness on them.
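For example, here's a minimal T-SQL sketch of both cases (the dbo.Events table and the UX_Events_EventId and PK_Events names are hypothetical):

```sql
-- Unique index created with CREATE INDEX: ALTER INDEX can
-- enable IGNORE_DUP_KEY in place.
ALTER INDEX UX_Events_EventId ON dbo.Events
    REBUILD WITH (IGNORE_DUP_KEY = ON);

-- PRIMARY KEY or UNIQUE constraint: drop the constraint and
-- recreate it with the option instead.
ALTER TABLE dbo.Events DROP CONSTRAINT PK_Events;
ALTER TABLE dbo.Events ADD CONSTRAINT PK_Events
    PRIMARY KEY (EventId) WITH (IGNORE_DUP_KEY = ON);
```

With the option enabled, bulk inserts that contain duplicates complete with the warning *Duplicate key was ignored* instead of failing.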

## Column names are lowercase in Azure Stream Analytics (1.0)

When you use the original compatibility level (1.0), Azure Stream Analytics changes column names to lowercase. This behavior was fixed in later compatibility levels. To preserve the case, move to compatibility level 1.1 or later. For more information, see [Compatibility level for Stream Analytics jobs](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-compatibility-level).

## Get help

For further assistance, try our [Azure Stream Analytics forum](https://social.msdn.microsoft.com/Forums/azure/home?forum=AzureStreamAnalytics).

## Next steps

* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
* [Azure Stream Analytics query language reference](https://docs.microsoft.com/stream-analytics-query/stream-analytics-query-language-reference)
* [Azure Stream Analytics management REST API reference](https://msdn.microsoft.com/library/azure/dn835031.aspx)
