
Commit 73e7dc9

edit pass: stream-analytics-troubleshoot-output
1 parent f8490f3

1 file changed: 18 additions, 18 deletions

articles/stream-analytics/stream-analytics-troubleshoot-output.md

Lines changed: 18 additions & 18 deletions
@@ -16,15 +16,15 @@ This article describes common issues with Azure Stream Analytics output connecti

 ## The job doesn't produce output

-If the job doesn't produce outputs, verify connectivity:
+If the job doesn't produce output, verify connectivity:

 1. Verify connectivity to outputs using the **Test Connection** button for each output.
 1. Look at [Monitoring metrics](stream-analytics-monitoring.md) on the **Monitor** tab. Because the values are aggregated, the metrics are delayed by a few minutes.

-   * If Input Events are greater than zero, the job can read the input data. If Input Events are not greater than zero, there is an issue with the job's input. See [Troubleshoot input connections](stream-analytics-troubleshoot-input.md) for more information.
-   * If Data Conversion Errors are greater than zero and climbing, see [Azure Stream Analytics data errors](data-errors.md) for detailed information about data conversion errors.
-   * If Runtime Errors are greater than zero, your job can receive data but it's generating errors while processing the query. To find the errors, go to the [audit logs](../azure-resource-manager/management/view-activity-logs.md), and then filter on the *Failed* status.
-   * If InputEvents is greater than zero and OutputEvents equals zero, one of the following is true:
+   * If the **Input Events** value is greater than zero, the job can read the input data. If the **Input Events** value isn't greater than zero, there's an issue with the job's input. See [Troubleshoot input connections](stream-analytics-troubleshoot-input.md) for more information.
+   * If the **Data Conversion Errors** value is greater than zero and climbing, see [Azure Stream Analytics data errors](data-errors.md) for detailed information about data conversion errors.
+   * If the **Runtime Errors** value is greater than zero, your job receives data but generates errors while processing the query. To find the errors, go to the [audit logs](../azure-resource-manager/management/view-activity-logs.md), and then filter on the **Failed** status.
+   * If the **Input Events** value is greater than zero and the **Output Events** value equals zero, one of the following statements is true:
       * The query processing resulted in zero output events.
       * Events or fields might be malformed, resulting in a zero output after the query processing.
       * The job was unable to push data to the output sink for connectivity or authentication reasons.
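
One way the first sub-case above can occur is a query whose filter matches no events: the job is healthy, yet **Output Events** stays at zero. The following sketch is a hypothetical job query; the input alias, column names, and threshold are invented for illustration and don't come from the article:

```sql
-- Hypothetical query: the [telemetry] input and the Temperature threshold are assumptions.
-- If no event ever satisfies the WHERE clause, Input Events keeps climbing while
-- Output Events stays at zero, even though the job runs without errors.
SELECT
    DeviceId,
    Temperature
INTO
    [sql-output]
FROM
    [telemetry] TIMESTAMP BY EventEnqueuedUtcTime
WHERE
    Temperature > 100
```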
@@ -45,13 +45,13 @@ These factors impact the timeliness of the first output:

 * The use of windowed aggregates, such as a GROUP BY clause of tumbling, hopping, and sliding windows:

-  * For tumbling or hopping window aggregates, the results are generated at the end of the window timeframe.
-  * For a sliding window, the results are generated when an event enters or exits the sliding window.
-  * If you are planning to use a large window size, such as more than one hour, it’s best to choose a hopping or sliding window. These window types let you see the output more frequently.
+  * For tumbling or hopping window aggregates, the results generate at the end of the window timeframe.
+  * For a sliding window, the results generate when an event enters or exits the sliding window.
+  * If you're planning to use a large window size, such as more than one hour, it’s best to choose a hopping or sliding window. These window types let you see the output more frequently.

 * The use of temporal joins, such as JOIN with DATEDIFF:
   * Matches generate as soon as both sides of the matched events arrive.
-  * Data that lacks a match, like LEFT OUTER JOIN, generates at the end of the DATEDIFF window, with respect to each event on the left side.
+  * Data that lacks a match, like LEFT OUTER JOIN, generates at the end of the DATEDIFF window, for each event on the left side.

 * The use of temporal analytic functions, such as ISFIRST, LAST, and LAG with LIMIT DURATION:
   * For analytic functions, the output generates for every event. There is no delay.
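
The window and join behaviors described in this hunk can be sketched with hypothetical queries (the stream, column, and output names are invented): a hopping window with a short hop surfaces results much sooner than a one-hour tumbling window, and a LEFT OUTER JOIN emits unmatched rows only after its DATEDIFF window elapses.

```sql
-- Tumbling window: one aggregate per DeviceId, emitted only when each 60-minute window closes.
SELECT DeviceId, COUNT(*) AS EventCount
INTO [output-tumbling]
FROM [telemetry] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY DeviceId, TumblingWindow(minute, 60)

-- Hopping window covering the same 60 minutes but hopping every 5 minutes:
-- a window closes every 5 minutes, so output appears far more frequently.
SELECT DeviceId, COUNT(*) AS EventCount
INTO [output-hopping]
FROM [telemetry] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY DeviceId, HoppingWindow(minute, 60, 5)

-- Temporal LEFT OUTER JOIN: matches emit as soon as both events arrive; an impression
-- with no click emits only after its full 10-minute DATEDIFF window has passed.
SELECT i.ImpressionId, c.ClickId
INTO [output-joined]
FROM [impressions] i TIMESTAMP BY EventTime
LEFT OUTER JOIN [clicks] c TIMESTAMP BY EventTime
    ON i.ImpressionId = c.ImpressionId
    AND DATEDIFF(minute, i, c) BETWEEN 0 AND 10
```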
@@ -64,25 +64,25 @@ During the normal operation of a job, the output might have longer and longer pe
 * Whether the upstream source is throttled.
 * Whether the processing logic in the query is compute-intensive.

-To see the output details, select the streaming job in the Azure portal, and then select the **Job diagram**. For each input, there is a per partition backlog event metric. If the metric keeps increasing, it’s an indicator that the system resources are constrained. The increase is potentially due to output sink throttling, or high CPU usage. For more information, see [Data-driven debugging by using the job diagram](stream-analytics-job-diagram-with-metrics.md).
+To see the output details, select the streaming job in the Azure portal, and then select **Job diagram**. For each input, there's a per partition backlog event metric. If the metric keeps increasing, it’s an indicator that the system resources are constrained. The increase is potentially due to output sink throttling, or high CPU usage. For more information, see [Data-driven debugging by using the job diagram](stream-analytics-job-diagram-with-metrics.md).

-## Key violation warning in Azure SQL Database output
+## Key violation warning with Azure SQL Database output

 When you configure an Azure SQL Database as output to a Stream Analytics job, it bulk inserts records into the destination table. In general, Stream Analytics guarantees [at least once delivery](https://docs.microsoft.com/stream-analytics-query/event-delivery-guarantees-azure-stream-analytics) to the output sink. You can still [achieve exactly-once delivery]( https://blogs.msdn.microsoft.com/streamanalytics/2017/01/13/how-to-achieve-exactly-once-delivery-for-sql-output/) to a SQL output when a SQL table has a unique constraint defined.

-When unique key constraints are set up on the SQL table and duplicate records are inserted into the SQL table, Stream Analytics removes the duplicate records. It splits the data into batches and recursively inserts the batches until a single duplicate record is found. The split and insert process ignores the duplicates one at a time. For a streaming job that has a considerable number of duplicate rows, the process is inefficient and time-consuming. If you see multiple key violation warning messages in your Activity log for the previous hour, it’s likely that your SQL output is slowing down the entire job.
+When you set up unique key constraints on the SQL table, Stream Analytics removes duplicate records. It splits the data into batches and recursively inserts the batches until a single duplicate record is found. The split and insert process ignores the duplicates one at a time. For a streaming job that has a considerable number of duplicate rows, the process is inefficient and time-consuming. If you see multiple key violation warning messages in your Activity log for the previous hour, it’s likely your SQL output is slowing down the entire job.

 To resolve this issue, [configure the index]( https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql) causing the key violation by enabling the IGNORE_DUP_KEY option. This option allows SQL to ignore duplicate values during bulk inserts. Azure SQL Database simply produces a warning message instead of an error. Stream Analytics doesn't produce primary key violation errors anymore.

-Note the following observations when configuring the IGNORE_DUP_KEY for several types of indexes:
+Note the following observations when configuring the IGNORE_DUP_KEY option for several types of indexes:

-* You can't set the IGNORE_DUP_KEY on a primary key or a unique constraint that uses ALTER INDEX. You need to drop and recreate the index.
-* You can't set the IGNORE_DUP_KEY option using ALTER INDEX for a unique index. This is different from a PRIMARY KEY/UNIQUE constraint and is created using a CREATE INDEX or INDEX definition.
-* The IGNORE_DUP_KEY doesn’t apply to column store indexes because you can’t enforce uniqueness on them.
+* You can't set the IGNORE_DUP_KEY option on a primary key or a unique constraint that uses ALTER INDEX. You need to drop the index and recreate it.
+* You can't set the IGNORE_DUP_KEY option using ALTER INDEX for a unique index. This instance is different from a PRIMARY KEY/UNIQUE constraint and is created using a CREATE INDEX or INDEX definition.
+* The IGNORE_DUP_KEY option doesn’t apply to column store indexes because you can’t enforce uniqueness on them.

-## Column names are lower-case in Stream Analytics (1.0)
+## Column names are lowercase in Stream Analytics (1.0)

-When using the original compatibility level (1.0), Stream Analytics changes column names to lower-case. This behavior was fixed in later compatibility levels. To preserve the case, move to compatibility level 1.1 or later. See [Compatibility level for Stream Analytics jobs](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-compatibility-level) for more information.
+When using the original compatibility level (1.0), Stream Analytics changes column names to lowercase. This behavior was fixed in later compatibility levels. To preserve the case, move to compatibility level 1.1 or later. For more information, see [Compatibility level for Stream Analytics jobs](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-compatibility-level).

 ## Get help
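
The mitigation described in this hunk (enabling IGNORE_DUP_KEY) can be sketched in T-SQL against a hypothetical destination table; the table and index names are assumptions, not from the article. Because the option can't be enabled through ALTER INDEX, the sketch drops the index and recreates it:

```sql
-- dbo.Events and IX_Events_EventId are hypothetical names for the SQL output's
-- destination table and its unique index.
DROP INDEX IF EXISTS IX_Events_EventId ON dbo.Events;

-- Recreate the unique index with IGNORE_DUP_KEY = ON so duplicate rows produced by
-- at-least-once delivery raise a warning and are skipped instead of failing the bulk insert.
CREATE UNIQUE INDEX IX_Events_EventId
    ON dbo.Events (EventId)
    WITH (IGNORE_DUP_KEY = ON);
```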
