Commit 106ed74

Merge pull request #8010 from MikeWasson/patch-2
Fix note formatting
2 parents: dbd2d7a + 69555a3

1 file changed: +2 -1 lines changed
articles/stream-analytics/stream-analytics-scale-jobs.md

Lines changed: 2 additions & 1 deletion
@@ -26,7 +26,8 @@ If your query is inherently fully parallelizable across input partitions, you ca
 - If the issue is due to sink throttling, you may need to increase the number of output partitions (and also input partitions to keep the job fully parallelizable), or increase the amount of resources of the sink (for example number of Request Units for CosmosDB).
 - In job diagram, there is a per partition backlog event metric for each input. If the backlog event metric keeps increasing, it’s also an indicator that the system resource is constrained (either because of output sink throttling, or high CPU).
 4. Once you have determined the limits of what a 6 SU job can reach, you can extrapolate linearly the processing capacity of the job as you add more SUs, assuming you don’t have any data skew that makes certain partition “hot.”
->[!Note]
+
+> [!NOTE]
 > Choose the right number of Streaming Units:
 > Because Stream Analytics creates a processing node for each 6 SU added, it’s best to make the number of nodes a divisor of the number of input partitions, so the partitions can be evenly distributed across the nodes.
 For example, you have measured your 6 SU job can achieve 4 MB/s processing rate, and your input partition count is 4. You can choose to run your job with 12 SU to achieve roughly 8 MB/s processing rate, or 24 SU to achieve 16 MB/s. You can then decide when to increase SU number for the job to what value, as a function of your input rate.
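The extrapolation in the example above is plain arithmetic. The following is a minimal sketch, not part of the article or this commit, assuming purely linear scaling from a measured 6 SU baseline and no hot partitions; the function names are hypothetical.

```python
# Sketch of the capacity arithmetic described in the changed section (assumption:
# throughput scales linearly with SUs and no partition is "hot").

def extrapolated_throughput_mbps(measured_6su_mbps: float, streaming_units: int) -> float:
    """Extrapolate throughput linearly from the rate measured with a 6 SU job."""
    return measured_6su_mbps * (streaming_units / 6)

def candidate_streaming_units(input_partitions: int, max_nodes: int = 8) -> list:
    """SU counts whose node count (one processing node per 6 SU) evenly divides
    the number of input partitions, so partitions spread evenly across nodes."""
    return [6 * nodes for nodes in range(1, max_nodes + 1)
            if input_partitions % nodes == 0]

if __name__ == "__main__":
    # The article's example: a 6 SU job measured at 4 MB/s with 4 input partitions.
    print(extrapolated_throughput_mbps(4.0, 12))  # 8.0 MB/s
    print(extrapolated_throughput_mbps(4.0, 24))  # 16.0 MB/s
    print(candidate_streaming_units(4))           # [6, 12, 24] -> 1, 2, or 4 nodes
```

With the numbers from the example (4 MB/s at 6 SU, 4 input partitions), this reproduces the roughly 8 MB/s at 12 SU and 16 MB/s at 24 SU figures, and lists the SU counts for which the node count divides the partition count evenly.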
