Commit 4129aaf

Merge pull request #243028 from kevinjaku/patch-1
Update known-issues.md
2 parents: c48e04b + 9b5c761

File tree

1 file changed: 10 additions, 17 deletions

articles/synapse-analytics/known-issues.md

@@ -28,7 +28,6 @@ To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
 |Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround|
 |Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has Workaround|
 |Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has Workaround|
-|Azure Synapse Apache Spark pool|[Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines](#failed-to-write-to-sql-dedicated-pool-from-synapse-spark-using-azure-synapse-dedicated-sql-pool-connector-for-apache-spark-when-using-notebooks-in-pipelines)|Has Workaround|
 
 ## Azure Synapse Analytics serverless SQL pool active known issues summary
 
@@ -96,27 +95,12 @@ Deleting a Synapse workspace fails with the error message:
 
 **Workaround**: The problem can be mitigated by retrying the delete operation. The engineering team is aware of this behavior and working on a fix.
 
-## Azure Synapse Analytics Apache Spark pool active known issues summary
-
-The following are known issues with the Synapse Spark.
-
-### Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines
-
-While using Azure Synapse Dedicated SQL Pool Connector for Apache Spark to write Azure Synapse Dedicated pool using Notebooks in pipelines, we would see an error message:
-
-`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: COPY statement input file schema discovery failed: Cannot bulk load. The file does not exist or you don't have file access rights.`
-
-**Workaround**: The engineering team is currently aware of this behavior and working on a fix. Following steps can be followed to work around the problem.
-- Set spark config through notebook:
-<br/>`spark.conf.set("spark.synapse.runAsMsi", "true")`
-- Or set spark config at [pool level](spark/apache-spark-azure-create-spark-configuration.md#create-an-apache-spark-configuration).
-
-
 ## Recently Closed Known issues
 
 |Synapse Component|Issue|Status|Date Resolved
 |---------|---------|---------|---------|
 |Azure Synapse serverless SQL pool|[Query failures while reading Cosmos DB data using OPENROWSET](#query-failures-while-reading-azure-cosmos-db-data-using-openrowset)|Resolved|March 2023
+|Azure Synapse Apache Spark pool|[Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines](#failed-to-write-to-sql-dedicated-pool-from-synapse-spark-using-azure-synapse-dedicated-sql-pool-connector-for-apache-spark-when-using-notebooks-in-pipelines)|Resolved|June 2023
 
 ## Azure Synapse Analytics serverless SQL pool recently closed known issues summary
 
@@ -128,6 +112,15 @@ Queries from serverless SQL pool to Cosmos DB Analytical Store using OPENROWSET
 
 **Status**: Resolved
 
+## Azure Synapse Analytics Apache Spark pool active known issues summary
+
+### Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines
+
+While using Azure Synapse Dedicated SQL Pool Connector for Apache Spark to write Azure Synapse Dedicated pool using Notebooks in pipelines, we would see an error message:
+
+`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: COPY statement input file schema discovery failed: Cannot bulk load. The file does not exist or you don't have file access rights.`
+
+**Status**: Resolved
 
 ## Next steps
 