Commit fca9327

Merge pull request #303672 from silasmendes-ms/serverless-sql-know-issue-log-analytics
Added known issue: Long queries may not appear in Log Analytics for serverless SQL pool
2 parents 57eab6a + 451a974 commit fca9327

File tree

1 file changed (+30 −18 lines changed)

articles/synapse-analytics/known-issues.md

@@ -29,13 +29,14 @@ To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse dedicated SQL pool|[Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'](#query-failure-when-ingesting-a-parquet-file-into-a-table-with-auto_create_tableon)|Has workaround|
|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has workaround|
|Azure Synapse dedicated SQL pool|[UPDATE STATISTICS statement fails with error: "The provided statistics stream is corrupt."](#update-statistics-failure)|Has workaround|
-|Azure Synapse dedicated SQL pool|[Enable TDE gateway time-outs in ARM deployment](#enable-tde-gateway-time-outs-in-arm-deployment)|Has workaround|
+|Azure Synapse dedicated SQL pool|[Enable TDE gateway timeouts in ARM deployment](#enable-tde-gateway-timeouts-in-arm-deployment)|Has workaround|
|Azure Synapse serverless SQL pool|[Query failures from serverless SQL pool to Azure Cosmos DB analytical store](#query-failures-from-serverless-sql-pool-to-azure-cosmos-db-analytical-store)|Has workaround|
|Azure Synapse serverless SQL pool|[Azure Cosmos DB analytical store view propagates wrong attributes in the column](#azure-cosmos-db-analytical-store-view-propagates-wrong-attributes-in-the-column)|Has workaround|
|Azure Synapse serverless SQL pool|[Query failures in serverless SQL pools](#query-failures-in-serverless-sql-pools)|Has workaround|
|Azure Synapse serverless SQL pool|[Storage access issues due to authorization header being too long](#storage-access-issues-due-to-authorization-header-being-too-long)|Has workaround|
|Azure Synapse serverless SQL pool|[Querying a view shows unexpected results](#querying-a-view-shows-unexpected-results)|Has workaround|
-|Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has workaround|
+|Azure Synapse serverless SQL pool|[Queries longer than 7,500 characters may not appear in Log Analytics](#queries-longer-than-7500-characters-may-not-appear-in-log-analytics)|Has workaround|
+|Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) isn't getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has workaround|
|Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has workaround|
|Azure Synapse Workspace|[REST API PUT operations or ARM/Bicep templates to update network settings fail](#rest-api-put-operations-or-armbicep-templates-to-update-network-settings-fail)|Has workaround|
|Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has workaround|
@@ -47,7 +48,7 @@ To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov

### Data Factory copy command fails with error "The request could not be performed because of an I/O device error"

-Azure Data Factory pipelines use the `COPY INTO` Transact-SQL statement to ingest data at scale into dedicated SQL pool tables. In some rare cases, the `COPY INTO` statement can fail when loading CSV files into dedicated SQL pool table when file split is used in an Azure Data Factory pipeline. File splitting is a mechanism that improves load performance when a small number of larger (1 GB+) files are loaded in a single copy task. When file splitting is enabled, a single file can be loaded by multiple parallel threads, where every thread is assigned a part of the file.
+Azure Data Factory pipelines use the `COPY INTO` Transact-SQL statement to ingest data at scale into dedicated SQL pool tables. In some rare cases, the `COPY INTO` statement can fail when loading CSV files into a dedicated SQL pool table when file split is used in an Azure Data Factory pipeline. File splitting is a mechanism that improves load performance when a few larger (1 GB+) files are loaded in a single copy task. When file splitting is enabled, multiple parallel threads can load a single file, where each thread processes a part of the file.

**Workaround**: Impacted customers should disable file split in Azure Data Factory.
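
For context, a `COPY INTO` load of CSV files into a dedicated SQL pool table typically looks like the following sketch; the storage URL, table name, and options here are hypothetical placeholders, not taken from the affected pipeline:

```sql
-- Hedged sketch: bulk-load CSV files into a dedicated SQL pool table.
-- The storage account, container, and table names are placeholders.
COPY INTO dbo.SalesStaging
FROM 'https://contosostorage.blob.core.windows.net/landing/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIRSTROW = 2,                                -- skip the header row
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```

File splitting is an Azure Data Factory copy-activity behavior layered on top of this statement, so the mitigation is applied in the pipeline settings rather than in the T-SQL itself.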

@@ -61,15 +62,15 @@ When using `COPY INTO` command with a managed identity, the statement can fail a

An internal upgrade of our telemetry emission logic, which was meant to enhance the performance and reliability of our telemetry data, caused an unexpected issue that affected some customers' ability to monitor their dedicated SQL pool, `tempdb`, and Data Warehouse Data IO metrics.

-**Workaround**: Upon identifying the issue, our team took action to identify the root cause and update the configuration in our system. Customers can fix the issue by pausing and resuming their instance, which will restore the normal state of the instance and the telemetry data flow.
+**Workaround**: Upon identifying the issue, our team investigated the root cause and updated the configuration in our system. Customers can fix the issue by pausing and resuming their instance, which restores the normal state of the instance and the telemetry data flow.

### Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'

Customers who try to ingest a parquet file into a hash-distributed table with `AUTO_CREATE_TABLE='ON'` can receive the following error:

`COPY statement using Parquet and auto create table enabled currently cannot load into hash-distributed tables`

-[Ingestion into an auto-created hash-distributed table using AUTO_CREATE_TABLE is unsupported](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true#auto_create_table---on--off-). Customers that have previously loaded using this unsupported scenario should CTAS their data into a new table and use it in place of the old table.
+[Ingestion into an auto-created hash-distributed table using AUTO_CREATE_TABLE is unsupported](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true#auto_create_table---on--off-). Customers who previously loaded data using this unsupported scenario should use CREATE TABLE AS SELECT (CTAS) to copy the data into a new table and replace the old table.
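
The CTAS remediation can be sketched as follows; the table names and distribution column are hypothetical and should be replaced with your own:

```sql
-- Hedged sketch: copy data from the previously auto-created table into a new
-- hash-distributed table, then swap the names. All identifiers are placeholders.
CREATE TABLE dbo.FactSales_new
WITH (
    DISTRIBUTION = HASH(CustomerId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.FactSales;

RENAME OBJECT dbo.FactSales TO FactSales_old;
RENAME OBJECT dbo.FactSales_new TO FactSales;
```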

### Queries failing with Data Exfiltration Error

@@ -85,14 +86,14 @@ Some dedicated SQL Pools can encounter an exception when executing an `UPDATE ST

When a new constraint is added to a table, a related statistic is created in the distributions. If a clustered index is also created on the table, it must include the same columns (in the same order) as the constraint; otherwise, `UPDATE STATISTICS` commands on those columns might fail.

-**Workaround**: Identify if a constraint and clustered index exist on the table. If so, DROP both the constraint and clustered index. After that, recreate the clustered index and then the constraint *ensuring that both include the same columns in the same order.* If the table does not have a constraint and clustered index, or if the above step results in the same error, contact the Microsoft Support Team for assistance.
+**Workaround**: Identify whether a constraint and clustered index exist on the table. If so, DROP both the constraint and the clustered index. Then recreate the clustered index followed by the constraint, *ensuring that both include the same columns in the same order.* If the table doesn't have a constraint and clustered index, or if this step results in the same error, contact the Microsoft Support Team for assistance.
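
The drop-and-recreate step can be sketched as below; all object and column names are hypothetical, and the exact constraint type and column list must match your table:

```sql
-- Hedged sketch: drop and recreate a clustered index and unique constraint
-- so that both cover the same columns in the same order. Identifiers are
-- placeholders; dedicated SQL pool requires NOT ENFORCED on constraints.
ALTER TABLE dbo.Orders DROP CONSTRAINT UQ_Orders_OrderId_OrderDate;
DROP INDEX CIX_Orders ON dbo.Orders;

CREATE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders (OrderId, OrderDate);

ALTER TABLE dbo.Orders
    ADD CONSTRAINT UQ_Orders_OrderId_OrderDate
    UNIQUE (OrderId, OrderDate) NOT ENFORCED;

UPDATE STATISTICS dbo.Orders;  -- verify the failure no longer reproduces
```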

-### Enable TDE gateway time-outs in ARM deployment
+### Enable TDE gateway timeouts in ARM deployment

-Updating TDE (Transparent data encryption) is internally implemented as a synchronous operation, subject to a time-out which can be exceeded. Although the time-out was exceeded, behind the scenes the TDE operation in most cases succeeds, but causes successor operations in the ARM template to be rejected.
+Updating TDE (transparent data encryption) is internally implemented as a synchronous operation, subject to a timeout that can be exceeded. Even when the timeout is exceeded, the TDE operation usually succeeds behind the scenes, but it causes subsequent operations in the ARM template to be rejected.

**Workaround**: There are two ways to mitigate this issue. The preferred option is to split the ARM template into multiple templates, so that one of them contains the TDE update. That action reduces the chance of a timeout.
-Other option is to retry the deployment after several minutes. During the wait time, the TDE update operation most likely will succeed and re-deploying the template the second time could execute previously rejected operations.
+Another option is to retry the deployment after several minutes. During the wait, the TDE update operation most likely succeeds, and redeploying the template can execute the previously rejected operations.

### Tag updates appear to fail

@@ -126,7 +127,7 @@ When using an ARM template, Bicep file, or direct REST API PUT operation to chan

### Known issue incorporating square brackets [] in the value of Tags

-In the context of updating tag values within an Azure Synapse workspace, the inclusion of square brackets (`[]`) will result in an unsuccessful update operation.
+When updating tag values within an Azure Synapse workspace, including square brackets (`[]`) causes the update operation to fail.

**Workaround**: The current workaround is to avoid using square brackets (`[]`) in Azure Synapse workspace tag values.

@@ -139,28 +140,28 @@ The error message displayed is `Action failed - Error: Orchestrate failed - Synt
**Workaround**: The following actions can be taken as quick mitigation:

- **Remove escape characters:** Manually remove any escape characters (`\`) from the parameters file before deployment; that is, edit the file to remove the characters that could cause issues during the parsing or processing stage of the deployment.
-- **Replace escape characters with Forward Slashes:** Replace the escape characters (`\`) with forward slashes (`/`). This can be particularly useful in file paths, where many systems accept forward slashes as valid path separators. This replacement might help in bypassing the problem with escape characters, allowing the deployment process to succeed.
+- **Replace escape characters with forward slashes:** Replace the escape characters (`\`) with forward slashes (`/`). This is particularly useful in file paths, where many systems accept forward slashes as valid path separators, and it might allow the deployment process to succeed.

-After applying either of these workarounds and successfully deploying, manually update the necessary configurations within the workspace to ensure everything is set up correctly. This might involve editing configuration files, adjusting settings, or performing other tasks relevant to the specific environment or application being deployed.
+After applying either of these workarounds and successfully deploying, manually update the necessary configurations within the workspace to ensure everything is set up correctly. This step might involve editing configuration files, adjusting settings, or performing other tasks relevant to the specific environment or application being deployed.

### No 'GET' API operation dedicated to the "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled" setting

-**Issue Summary:** In Azure Synapse Analytics, there is no dedicated 'GET' API operation for retrieving the state of the "trustedServiceBypassEnabled" setting at the resource scope "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled". While users can set this configuration, they cannot directly retrieve its state via this specific resource scope.
+**Issue Summary:** In Azure Synapse Analytics, there's no dedicated 'GET' API operation for retrieving the state of the "trustedServiceBypassEnabled" setting at the resource scope "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled". While users can set this configuration, they can't directly retrieve its state via this specific resource scope.

-**Impact:** This limitation impacts Azure Policy definitions, as they cannot enforce a specific state for the "trustedServiceBypassEnabled" setting. Customers are unable to use Azure Policy to deny or manage this configuration.
+**Impact:** This limitation impacts Azure Policy definitions, as they can't enforce a specific state for the "trustedServiceBypassEnabled" setting. Customers are unable to use Azure Policy to deny or manage this configuration.

-**Workaround:** There is no workaround available in Azure Policy to enforce the desired configuration state for this property. However, users can use the 'GET' workspace operation to audit the configuration state for reporting purposes.\
+**Workaround:** There's no workaround available in Azure Policy to enforce the desired configuration state for this property. However, users can use the 'GET' workspace operation to audit the configuration state for reporting purposes.\
This 'GET' workspace operation maps to the 'Microsoft.Synapse/workspaces/trustedServiceBypassEnabled' Azure Policy Alias.

The Azure Policy Alias can be used for managing this property with a Deny Azure Policy Effect if the operation is a PUT request against the Microsoft.Synapse/workspace resource, but it will only function for Audit purposes if the PUT request is being sent directly to the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource. The parent resource has a property [properties.trustedServiceBypassEnabled] that maps the configuration from the child resource, which is why it can still be audited through the parent resource's Azure Policy Alias.

-Since the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource has no GET operation available, Azure Policy cannot manage these requests, and Azure Policy cannot generate an Azure Policy Alias for it.
+Since the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource has no GET operation available, Azure Policy can't manage these requests, and Azure Policy can't generate an Azure Policy Alias for it.

**Parent Resource:** Microsoft.Synapse/workspaces

**Child Resource:** Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration

-The Azure portal makes the PUT request directly to the PUT API for the child resource and therefore the Azure portal, along with any other API requests made outside of the parent Microsoft.Synapse/workspaces APIs, cannot be managed by Azure Policy through a Deny or other actionable Azure Policy Effect.
+The Azure portal makes the PUT request directly to the PUT API for the child resource, so the Azure portal, along with any other API requests made outside of the parent Microsoft.Synapse/workspaces APIs, can't be managed by Azure Policy through a Deny or other actionable Azure Policy Effect.

## Azure Synapse Analytics serverless SQL pool active known issues summary

@@ -253,6 +254,17 @@ When you query the view for which the underlying schema has changed after the vi

**Workaround**: Manually adjust the view definition.

+### Queries longer than 7,500 characters may not appear in Log Analytics
+
+Queries that exceed 7,500 characters might not be captured in the `SynapseBuiltinSqlPoolRequestsEnded` table in Log Analytics.
+
+**Workaround**: Try one of the following:
+
+- Use the `sys.dm_exec_requests_history` view in your serverless SQL pool to access historical query execution details.
+- Refactor the query to reduce its length below 7,500 characters, if feasible.
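
The first workaround can be sketched as a query against the serverless SQL pool's built-in history view; the column list follows the documented view, and the 7,500-character filter is illustrative:

```sql
-- Hedged sketch: retrieve long queries from the serverless SQL pool's
-- history view instead of Log Analytics. Run in the serverless SQL pool.
SELECT start_time,
       end_time,
       login_name,
       total_elapsed_time_ms,
       query_text
FROM sys.dm_exec_requests_history
WHERE LEN(query_text) > 7500   -- queries that may be missing from Log Analytics
ORDER BY start_time DESC;
```

Note that this view retains a limited history, so it complements rather than replaces long-term retention in Log Analytics.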

## Recently closed known issues

@@ -270,7 +282,7 @@ When you query the view for which the underlying schema has changed after the vi

### Queries using Microsoft Entra authentication fails after 1 hour

-SQL connections using Microsoft Entra authentication that remain active for more than 1 hour starts to fail. This includes querying storage using Microsoft Entra pass-through authentication and statements that interact with Microsoft Entra ID, like CREATE EXTERNAL PROVIDER. This affects every tool that keeps connections active, like query editor in SSMS and ADS. Tools that open new connection to execute queries aren't affected, like Synapse Studio.
+SQL connections using Microsoft Entra authentication that remain active for more than 1 hour start to fail. This includes querying storage using Microsoft Entra pass-through authentication and statements that interact with Microsoft Entra ID, like CREATE EXTERNAL PROVIDER. This affects every tool that keeps connections active, like the query editor in SSMS (SQL Server Management Studio) and ADS (Azure Data Studio). Tools that open a new connection to execute queries, like Synapse Studio, aren't affected.

**Status**: Resolved
