articles/synapse-analytics/known-issues.md (30 additions, 18 deletions)
To learn more about Azure Synapse Analytics, see the Azure Synapse Analytics Overview.

|Azure Synapse Component|Issue|Status|
|---|---|---|
|Azure Synapse dedicated SQL pool|[Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'](#query-failure-when-ingesting-a-parquet-file-into-a-table-with-auto_create_tableon)|Has workaround|
|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has workaround|
|Azure Synapse dedicated SQL pool|[UPDATE STATISTICS statement fails with error: "The provided statistics stream is corrupt."](#update-statistics-failure)|Has workaround|
|Azure Synapse dedicated SQL pool|[Enable TDE gateway timeouts in ARM deployment](#enable-tde-gateway-timeouts-in-arm-deployment)|Has workaround|
|Azure Synapse serverless SQL pool|[Query failures from serverless SQL pool to Azure Cosmos DB analytical store](#query-failures-from-serverless-sql-pool-to-azure-cosmos-db-analytical-store)|Has workaround|
|Azure Synapse serverless SQL pool|[Azure Cosmos DB analytical store view propagates wrong attributes in the column](#azure-cosmos-db-analytical-store-view-propagates-wrong-attributes-in-the-column)|Has workaround|
|Azure Synapse serverless SQL pool|[Storage access issues due to authorization header being too long](#storage-access-issues-due-to-authorization-header-being-too-long)|Has workaround|
|Azure Synapse serverless SQL pool|[Queries longer than 7,500 characters may not appear in Log Analytics](#queries-longer-than-7500-characters-may-not-appear-in-log-analytics)|Has workaround|
|Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) isn't getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has workaround|
|Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has workaround|
|Azure Synapse Workspace|[REST API PUT operations or ARM/Bicep templates to update network settings fail](#rest-api-put-operations-or-armbicep-templates-to-update-network-settings-fail)|Has workaround|
|Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has workaround|
### Data Factory copy command fails with error "The request could not be performed because of an I/O device error"
Azure Data Factory pipelines use the `COPY INTO` Transact-SQL statement to ingest data at scale into dedicated SQL pool tables. In some rare cases, the `COPY INTO` statement can fail when loading CSV files into a dedicated SQL pool table with file splitting enabled in an Azure Data Factory pipeline. File splitting is a mechanism that improves load performance when a few larger (1 GB+) files are loaded in a single copy task. When file splitting is enabled, multiple parallel threads can load a single file, where each thread processes a part of the file.
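For illustration, a minimal sketch of the kind of `COPY INTO` statement such a pipeline issues when loading CSV files; the table name, storage URL, and credential here are hypothetical placeholders, not the pipeline's actual generated statement:

```sql
-- Hypothetical COPY INTO load of CSV files into a dedicated SQL pool table.
-- Replace the table name, storage URL, and credential with your own values.
COPY INTO dbo.StagingSales
FROM 'https://<storageaccount>.blob.core.windows.net/ingest/sales/*.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIRSTROW = 2,                                 -- skip the CSV header row
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- or a SAS/key credential
);
```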
**Workaround**: Impacted customers should disable file split in Azure Data Factory.
An internal upgrade of our telemetry emission logic, which was meant to enhance the performance and reliability of our telemetry data, caused an unexpected issue that affected some customers' ability to monitor their dedicated SQL pool, `tempdb`, and Data Warehouse Data IO metrics.
**Workaround**: Upon identifying the issue, our team determined the root cause and updated the configuration in our system. Customers can fix the issue by pausing and resuming their instance, which restores the normal state of the instance and the telemetry data flow.
### Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'
Customers who try to ingest a parquet file into a hash-distributed table with `AUTO_CREATE_TABLE='ON'` can receive the following error:
`COPY statement using Parquet and auto create table enabled currently cannot load into hash-distributed tables`
[Ingestion into an auto-created hash-distributed table using AUTO_CREATE_TABLE is unsupported](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true#auto_create_table---on--off-). Customers who previously loaded data using this unsupported scenario should use CREATE TABLE AS SELECT (CTAS) to copy the data into a new table and replace the old table.
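A minimal sketch of that remediation, assuming a hypothetical table `dbo.FactSales` and a hypothetical `SaleId` distribution column; adjust the names and distribution column to your schema:

```sql
-- Copy the previously loaded data into a new hash-distributed table via CTAS,
-- then swap the new table in for the old one.
CREATE TABLE dbo.FactSales_new
WITH
(
    DISTRIBUTION = HASH(SaleId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.FactSales;

RENAME OBJECT dbo.FactSales TO FactSales_old;   -- keep the old table as a backup
RENAME OBJECT dbo.FactSales_new TO FactSales;
```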
### Queries failing with Data Exfiltration Error
### UPDATE STATISTICS failure

Some dedicated SQL Pools can encounter an exception when executing an `UPDATE STATISTICS` statement.
When a new constraint is added to a table, a related statistic is created in the distributions. If a clustered index is also created on the table, it must include the same columns (in the same order) as the constraint, otherwise `UPDATE STATISTICS` commands on those columns might fail.
**Workaround**: Identify if a constraint and clustered index exist on the table. If so, DROP both the constraint and clustered index. After that, recreate the clustered index and then the constraint, *ensuring that both include the same columns in the same order*. If the table doesn't have a constraint and clustered index, or if the step results in the same error, contact the Microsoft Support Team for assistance.
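A minimal sketch of that sequence, using hypothetical table, constraint, and index names; note that dedicated SQL pool constraints must be created as `NOT ENFORCED`:

```sql
-- Drop the constraint and the clustered index (hypothetical names).
ALTER TABLE dbo.FactSales DROP CONSTRAINT UQ_FactSales_OrderKey;
DROP INDEX CI_FactSales_OrderKey ON dbo.FactSales;

-- Recreate the clustered index first, then the constraint,
-- ensuring both use the same columns in the same order.
CREATE CLUSTERED INDEX CI_FactSales_OrderKey ON dbo.FactSales (OrderKey);
ALTER TABLE dbo.FactSales
    ADD CONSTRAINT UQ_FactSales_OrderKey UNIQUE (OrderKey) NOT ENFORCED;
```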
### Enable TDE gateway timeouts in ARM deployment
Updating TDE (Transparent Data Encryption) is internally implemented as a synchronous operation that is subject to a timeout, which can be exceeded. Even when the timeout is exceeded, the TDE operation in most cases succeeds behind the scenes, but it causes successor operations in the ARM template to be rejected.
**Workaround**: There are two ways to mitigate this issue. The preferred option is to split the ARM template into multiple templates, so that one of the templates contains the TDE update. That action reduces the chance of a timeout.
The other option is to retry the deployment after several minutes. During the wait time, the TDE update operation most likely succeeds, and redeploying the template a second time could execute the previously rejected operations.
### Tag updates appear to fail
### Known issue incorporating square brackets [] in the value of Tags
When updating tag values within an Azure Synapse workspace, including square brackets (`[]`) results in an unsuccessful update operation.
**Workaround**: The current workaround is to avoid using square brackets (`[]`) in Azure Synapse workspace tag values.
The error message displayed is `Action failed - Error: Orchestrate failed - Synt…`
**Workaround**: The following actions can be taken as quick mitigation:
- **Remove escape characters:** Manually remove any escape characters (`\`) from the parameters file before deployment. This means editing the file to eliminate the characters that could be causing issues during the parsing or processing stage of the deployment.
- **Replace escape characters with forward slashes:** Replace the escape characters (`\`) with forward slashes (`/`). This can be particularly useful in file paths, where many systems accept forward slashes as valid path separators, and might help bypass the problem with escape characters, allowing the deployment to succeed.
After applying either of these workarounds and successfully deploying, manually update the necessary configurations within the workspace to ensure everything is set up correctly. This step might involve editing configuration files, adjusting settings, or performing other tasks relevant to the specific environment or application being deployed.
### No 'GET' API operation dedicated to the "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled" setting
**Issue Summary:** In Azure Synapse Analytics, there's no dedicated 'GET' API operation for retrieving the state of the "trustedServiceBypassEnabled" setting at the resource scope "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled". While users can set this configuration, they can't directly retrieve its state via this specific resource scope.
**Impact:** This limitation impacts Azure Policy definitions, as they can't enforce a specific state for the "trustedServiceBypassEnabled" setting. Customers are unable to use Azure Policy to deny or manage this configuration.
**Workaround:** There's no workaround available in Azure Policy to enforce the desired configuration state for this property. However, users can use the 'GET' workspace operation to audit the configuration state for reporting purposes.\
This 'GET' workspace operation maps to the 'Microsoft.Synapse/workspaces/trustedServiceBypassEnabled' Azure Policy Alias.
The Azure Policy Alias can be used for managing this property with a Deny Azure Policy Effect if the operation is a PUT request against the Microsoft.Synapse/workspace resource, but it only functions for Audit purposes if the PUT request is sent directly to the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource. The parent resource has a property [properties.trustedServiceBypassEnabled] that maps the configuration from the child resource, which is why it can still be audited through the parent resource's Azure Policy Alias.
Since the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource has no GET operation available, Azure Policy can't manage these requests or generate an Azure Policy Alias for it.
The Azure portal makes the PUT request directly to the PUT API for the child resource and therefore the Azure portal, along with any other API requests made outside of the parent Microsoft.Synapse/workspaces APIs, can't be managed by Azure Policy through a Deny or other actionable Azure Policy Effect.
## Azure Synapse Analytics serverless SQL pool active known issues summary
When you query the view for which the underlying schema has changed after the view…
**Workaround**: Manually adjust the view definition.
### Queries longer than 7,500 characters may not appear in Log Analytics
Queries that exceed 7,500 characters might not be captured in the `SynapseBuiltinSqlPoolRequestsEnded` table in Log Analytics.
**Workaround**: Suggested workarounds are:
- Use the `sys.dm_exec_requests_history` view in your Synapse Serverless SQL pool to access historical query execution details (see the sketch after this list).
- Refactor the query to reduce its length below 7,500 characters, if feasible.
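A minimal sketch of the first option; `SELECT *` is used deliberately so that no column names are presumed beyond `start_time`, which is assumed here for ordering — verify the view's actual columns in your pool:

```sql
-- List the most recent requests recorded by the serverless SQL pool's
-- query history DMV; filter the result on the columns your pool exposes.
SELECT TOP (100) *
FROM sys.dm_exec_requests_history
ORDER BY start_time DESC;  -- start_time is assumed; adjust if named differently
```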
## Recently closed known issues
### Queries using Microsoft Entra authentication fail after 1 hour
SQL connections using Microsoft Entra authentication that remain active for more than 1 hour start to fail. This includes querying storage using Microsoft Entra pass-through authentication and statements that interact with Microsoft Entra ID, like CREATE EXTERNAL PROVIDER. It affects every tool that keeps connections active, like the query editor in SSMS (SQL Server Management Studio) and ADS (Azure Data Studio). Tools that open a new connection to execute queries, like Synapse Studio, aren't affected.