articles/event-grid/mqtt-automotive-connectivity-and-data-solution.md (3 additions & 3 deletions)
@@ -81,7 +81,7 @@ Direct messages are executed with the minimum amount of hops for the best possib
1. **Event Grid** checks for authorization for the Companion app Service to determine if it can send messages to the provided topics.
1. Companion app subscribes to responses from the specific vehicle / command combination.
-In the case of vehicle state-dependent commands that require user consent **(B)**:
+When vehicle state-dependent commands require user consent **(B)**:
1. The vehicle owner / user provides consent for the execution of command and control functions to a **digital service** (in this example, a companion app). This is normally done when the user downloads/activates the app and the OEM activates their account. This triggers a configuration change on the vehicle to subscribe to the associated command topic in the MQTT broker.
2. The **companion app** uses the command and control managed API to request execution of a remote command.
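
To make the subscription step concrete, here is a minimal sketch of a companion-app backend listening on a per-vehicle, per-command response topic, assuming the paho-mqtt client. The broker host, client ID, and topic layout are illustrative placeholders, not values from the article, and client authentication is omitted.

```python
import paho.mqtt.client as mqtt

BROKER_HOST = "contoso-vehicles.ts.eventgrid.azure.net"            # placeholder host
RESPONSE_TOPIC = "vehicles/VIN0001/commands/door-unlock/response"  # assumed topic layout

def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe only after the connection is acknowledged.
    client.subscribe(RESPONSE_TOPIC, qos=1)

def on_message(client, userdata, message):
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="companion-backend")
client.on_connect = on_connect
client.on_message = on_message
client.tls_set()  # the broker requires TLS; credentials are omitted in this sketch
client.connect(BROKER_HOST, 8883)
client.loop_forever()
```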
@@ -102,7 +102,7 @@ This dataflow covers the process to register and provision vehicles and devices
:::image type="content" source="media/mqtt-automotive-connectivity-and-data-solution/provisioning-dataflow.png" alt-text="Diagram of the provisioning dataflow." border="false" lightbox="media/mqtt-automotive-connectivity-and-data-solution/provisioning-dataflow.png":::
-1. The **Factory System** commissions the vehicle device to the desired construction state. This may include firmware & software initial installation and configuration. As part of this process, the factory system will obtain and write the device *certificate*, created from the **Public Key Infrastructure** provider.
+1. The **Factory System** commissions the vehicle device to the desired construction state. This can include firmware & software initial installation and configuration. As part of this process, the factory system will obtain and write the device *certificate*, created from the **Public Key Infrastructure** provider.
1. The **Factory System** registers the vehicle & device using the *Vehicle & Device Provisioning API*.
1. The factory system triggers the **device provisioning client** to connect to the *device registration* and provision the device. The device retrieves connection information to the *MQTT broker*.
1. The *device registration* application creates the device identity with the MQTT broker.
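
A sketch of what the connection step can look like from the device side once provisioning completes: the device presents its factory-written X.509 certificate when connecting to the MQTT broker. The host, file paths, and the username convention are assumptions for illustration, again using paho-mqtt.

```python
import paho.mqtt.client as mqtt

BROKER_HOST = "contoso-vehicles.ts.eventgrid.azure.net"  # placeholder host
DEVICE_ID = "device-001"                                 # placeholder identity

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id=DEVICE_ID)
# Present the device certificate written by the factory system and
# issued by the Public Key Infrastructure provider.
client.tls_set(
    ca_certs="/etc/ssl/certs/ca-bundle.crt",
    certfile="/var/secrets/device-001.pem",
    keyfile="/var/secrets/device-001.key",
)
client.username_pw_set(username=DEVICE_ID)  # assumed authentication-name mapping
client.connect(BROKER_HOST, 8883)
client.loop_start()
```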
@@ -161,7 +161,7 @@ Each *vehicle messaging scale unit* supports a defined vehicle population (for e
* [Azure Functions](../azure-functions/functions-overview.md) processes the vehicle messages. It can also be used to implement management APIs that require short-lived execution.
* [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) is an alternative when the functionality behind the Managed APIs consists of complex workloads deployed as containerized applications.
* [Azure Cosmos DB](../cosmos-db/introduction.md) stores the vehicle, device and user consent settings.
-* [Azure API Management](../azure/api-management/api-management-key-concepts.md) provides a managed API gateway to existing back-end services such as vehicle lifecycle management (including OTA) and user consent management.
+* [Azure API Management](../api-management/api-management-key-concepts.md) provides a managed API gateway to existing back-end services such as vehicle lifecycle management (including OTA) and user consent management.
* [Azure Batch](../batch/batch-technical-overview.md) runs large compute-intensive tasks efficiently, such as vehicle communication trace ingestion.
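
As a sketch of the Azure Functions role above (Python v2 programming model), assuming vehicle messages are routed from the MQTT broker to the function through an Event Grid subscription; the function name and message fields are illustrative.

```python
import json
import azure.functions as func

app = func.FunctionApp()

@app.event_grid_trigger(arg_name="event")
def process_vehicle_message(event: func.EventGridEvent):
    # Short-lived processing of a routed vehicle message:
    # parse, validate, then hand off to downstream storage/analytics.
    payload = event.get_json()
    vehicle_id = payload.get("vehicleId")  # assumed message field
    print(f"message from {vehicle_id}: {json.dumps(payload)}")
```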
articles/stream-analytics/includes/resource-logs-schema.md (1 addition & 1 deletion)
@@ -59,5 +59,5 @@ Message| Log message.
Type | Type of message. Maps to internal categorization of errors. For example, **JobValidationError** or **BlobOutputAdapterInitializationFailure**.
Correlation ID | GUID that uniquely identifies the job execution. All execution log entries from the time the job starts until the job stops have the same **Correlation ID** value.
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/platform/resource-logs-schema.md) or [all the resource log category types collected for Azure Stream Analytics](../monitor-azure-stream-analytics-reference.md#resource-logs).
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../../azure-monitor/platform/resource-logs-schema.md) or [all the resource log category types collected for Azure Stream Analytics](../monitor-azure-stream-analytics-reference.md#resource-logs).
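
As an illustration of how the Correlation ID ties one job run's log entries together, here is a minimal sketch using the azure-monitor-query SDK against a Log Analytics workspace. The workspace ID, GUID, and the exact table and column names are assumptions to verify against your own diagnostic settings.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# All execution-log entries for one job run share the same Correlation ID.
query = """
AzureDiagnostics
| where Category == "Execution"
| where CorrelationId == "00000000-0000-0000-0000-000000000000"
| project TimeGenerated, Level, Message
"""

response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```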
articles/stream-analytics/sql-database-output.md (2 additions & 2 deletions)
@@ -12,7 +12,7 @@ ms.date: 07/21/2022
You can use [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) as an output for data that's relational in nature or for applications that depend on content being hosted in a relational database. Azure Stream Analytics jobs write to an existing table in SQL Database. The table schema must exactly match the fields and their types in your job's output. The Azure portal experience for Stream Analytics allows you to [test your streaming query and also detect if there are any mismatches between the schema](sql-db-table.md) of the results produced by your job and the schema of the target table in your SQL database. To learn about ways to improve write throughput, see the [Stream Analytics with Azure SQL Database as output](stream-analytics-sql-output-perf.md) article. While you can also specify [Azure Synapse Analytics SQL pool](../synapse-analytics/overview-what-is.md) as an output via the SQL Database output option, it is recommended to use the dedicated [Azure Synapse Analytics output connector](azure-synapse-analytics-output.md) for best performance.
-You can also use [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) as an output. You have to [configure public endpoint in SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md) and then manually configure the following settings in Azure Stream Analytics. Azure virtual machine running SQL Server with a database attached is also supported by manually configuring the settings below.
+You can also use [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview.md) as an output. You have to [configure public endpoint in SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure.md) and then manually configure the following settings in Azure Stream Analytics. Azure virtual machine running SQL Server with a database attached is also supported by manually configuring the settings below.
## Output configuration
@@ -41,7 +41,7 @@ Partitioning needs to enabled and is based on the PARTITION BY clause in the que
## Output batch size
-You can configure the max message size by using **Max batch count**. The default maximum is 10,000 and the default minimum is 100 rows per single bulk insert. For more information, see [Azure SQL limits](../azure-sql/database/resource-limits-logical-server.md). Every batch is initially bulk inserted with maximum batch count. Batch is split in half (until minimum batch count) based on retryable errors from SQL.
+You can configure the max message size by using **Max batch count**. The default maximum is 10,000 and the default minimum is 100 rows per single bulk insert. For more information, see [Azure SQL limits](/azure/azure-sql/database/resource-limits-logical-server.md). Every batch is initially bulk inserted with maximum batch count. Batch is split in half (until minimum batch count) based on retryable errors from SQL.
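
The batching behavior described here can be summarized in a short sketch: insert at the maximum batch count, and on a retryable SQL error halve the batch (down to the minimum) before retrying. `insert_batch` and `is_retryable` are hypothetical stand-ins for the real bulk-insert call and error classification.

```python
def bulk_insert_with_split(rows, insert_batch, is_retryable,
                           max_count=10_000, min_count=100):
    """Bulk insert `rows`, halving the batch size on retryable errors."""
    start = 0
    while start < len(rows):
        size = min(max_count, len(rows) - start)
        while True:
            batch = rows[start:start + size]
            try:
                insert_batch(batch)  # one bulk insert attempt
                break
            except Exception as err:
                if is_retryable(err) and size > min_count:
                    size = max(size // 2, min_count)  # split in half and retry
                else:
                    raise
        start += size
```

In practice, the Stream Analytics job manages this splitting itself; the sketch only visualizes the policy the paragraph describes.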