Commit b4fa21e

Update stream-analytics-resource-model.md

1 parent: 3e4be60

1 file changed (+2 −2 lines)

articles/stream-analytics/stream-analytics-resource-model.md

@@ -25,10 +25,10 @@ A job can have one or more inputs to continuously read data from. These streamin
A job can have one or more outputs to continuously write data to. Stream Analytics supports 12 different output sinks including Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, Power BI and more. Adding these outputs to your job is also a zero-code operation.

### Query
- You can implement your stream processing logic by writing a SQL query in your job. The rich SQL language support allows you to tackle scenarios such as parsing complex JSON, filtering values, computing aggregates, performing joins, and even more advanced use cases such as geospatial analytics and anomaly detection. You can also extend this SQL language with JavaScript user-defined-functions (UDF) and user-defined-aggregates (UDA). Stream Analytics also allows you to easily adjust for late and out-of-order events through simple configurations in you job's settings. You can also choose to execute your query based on input event's arrival time at the input source or when the event was generated at the event source.
+ You can implement your stream processing logic by writing a SQL query in your job. The rich SQL language support allows you to tackle scenarios such as parsing complex JSON, filtering values, computing aggregates, performing joins, and even more advanced use cases such as geospatial analytics and anomaly detection. You can also extend this SQL language with JavaScript user-defined-functions (UDF) and user-defined-aggregates (UDA). Stream Analytics also allows you to easily adjust for late and out-of-order events through simple configurations in your job's settings. You can also choose to execute your query based on input event's arrival time at the input source or when the event was generated at the event source.

## Running a job
- Once you have developed your job by configuring inputs, output and a query, you can start your job by specifying number of Streaming Units. Once your job has started, it goes into a **Running** state and will stay in that state until explicitly stopped or it runs into a unrecoverable failure. When the job is in a running state, it continuously pulls data from your input sources, executes the query logic which produces results that get written to your output sinks with milliseconds end-to-end latency.
+ Once you have developed your job by configuring inputs, output and a query, you can start your job by specifying number of Streaming Units. Once your job has started, it goes into a **Running** state and will stay in that state until explicitly stopped or it runs into an unrecoverable failure. When the job is in a running state, it continuously pulls data from your input sources, executes the query logic which produces results that get written to your output sinks with milliseconds end-to-end latency.

When your job is started, the Stream Analytics service takes care of compiling your query and assigns a certain amount of compute and memory based on the number of Streaming Units configured in your job. You don't have to worry about the underlying infrastructure, such as cluster maintenance and security patches, as that is automatically taken care of by the platform. When running jobs in the Standard SKU, you are charged for the Streaming Units only when the job runs.
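The query paragraph in the diff above mentions filtering, windowed aggregation, and choosing between an event's arrival time and its generation time. As a minimal sketch of what such a query looks like, assuming hypothetical input/output aliases ([telemetry-input], [sqldb-output]) and hypothetical payload fields (DeviceId, Temperature, EventGeneratedTime) configured on the job:

```sql
-- Minimal Stream Analytics query sketch. The aliases and field names
-- below are hypothetical; TIMESTAMP BY, TumblingWindow, and the
-- aggregate functions are part of the Stream Analytics query language.
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    COUNT(*) AS ReadingCount
INTO
    [sqldb-output]
FROM
    [telemetry-input] TIMESTAMP BY EventGeneratedTime -- use event-generation time
WHERE
    Temperature IS NOT NULL -- drop malformed readings
GROUP BY
    DeviceId,
    TumblingWindow(minute, 1) -- fixed, non-overlapping one-minute windows
```

Omitting the TIMESTAMP BY clause makes the job fall back to each event's arrival time at the input source.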
