Commit a3a7f2f

Merge pull request #209759 from jovanpop-msft/patch-236
Synchronized delta table known issue
2 parents e57f792 + d773f1d commit a3a7f2f

File tree

1 file changed: 8 additions, 2 deletions

articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md

Lines changed: 8 additions & 2 deletions
@@ -178,7 +178,7 @@ The error "Invalid object name 'table name'" indicates that you're using an obje
 - If you don't see the object, maybe you're trying to query a table from a lake or Spark database. The table might not be available in the serverless SQL pool because:
   - The table has some column types that can't be represented in serverless SQL pool.
-  - The table has a format that isn't supported in serverless SQL pool. Examples are Delta or ORC.
+  - The table has a format that isn't supported in serverless SQL pool. Examples are Avro or ORC.

 ### Unclosed quotation mark after the character string

@@ -854,7 +854,7 @@ There are some limitations and known issues that you might see in Delta Lake sup
 - Make sure that you're referencing the root Delta Lake folder in the [OPENROWSET](./develop-openrowset.md) function or external table location.
   - The root folder must have a subfolder named `_delta_log`. The query fails if there's no `_delta_log` folder. If you don't see that folder, you're referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) by using Apache Spark pools.
 - Don't specify wildcards to describe the partition schema. The Delta Lake query automatically identifies the Delta Lake partitions.
-- Delta Lake tables created in the Apache Spark pools aren't automatically available in serverless SQL pool. To query such Delta Lake tables by using the T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as the format.
+- Delta Lake tables created in the Apache Spark pools are automatically available in serverless SQL pool, but the schema isn't updated. If you add columns to the Delta table by using a Spark pool, the changes won't be shown in the serverless database.
 - External tables don't support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on the Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article.
 - Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
 - Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
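As a minimal sketch of the first bullet, an `OPENROWSET` query should point at the root Delta Lake folder (the one containing `_delta_log`), not at individual Parquet files. The storage account, container, and folder names below are hypothetical:

```sql
-- T-SQL, serverless SQL pool. Hypothetical path: target the root Delta
-- folder that contains _delta_log, without wildcards for partitions.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/',
    FORMAT = 'DELTA'
) AS [rows];
```

If this query fails because `_delta_log` is missing, the folder holds plain Parquet files rather than a Delta Lake table.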
@@ -901,6 +901,12 @@ There are two options available to circumvent this error:

 Our engineering team is currently working on full support for Spark 3.3.

+### Delta tables in Lake databases don't have identical schemas in Spark and serverless pools
+
+Serverless SQL pools enable you to access Parquet, CSV, and Delta tables that are created in a Lake database by using Spark or Synapse designer. Access to the Delta tables is still in public preview, and currently serverless will synchronize a Delta table with Spark at the time of creation, but won't update the schema if columns are added later by using the `ALTER TABLE` statement in Spark.
+
+This is a public preview limitation. To resolve this issue, drop and re-create the Delta table in Spark (if it is possible) instead of altering tables.
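The drop-and-re-create workaround described above might be sketched as follows in Spark SQL, run in a Synapse Spark pool notebook. The database, table, and column names are hypothetical, and any existing data would need to be reloaded after re-creation:

```sql
-- Avoid this: the serverless SQL pool won't pick up the new column.
-- ALTER TABLE lakedb.sales ADD COLUMNS (region STRING);

-- Instead, re-create the table with the new column, so the serverless
-- SQL pool synchronizes the full schema at creation time.
DROP TABLE IF EXISTS lakedb.sales;

CREATE TABLE lakedb.sales (
    id BIGINT,
    amount DECIMAL(10, 2),
    region STRING   -- the newly added column
) USING DELTA;
```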
 ## Performance

 Serverless SQL pool assigns resources to the queries based on the size of the dataset and query complexity. You can't change or limit the resources that are provided to the queries. There are some cases where you might experience unexpected query performance degradation, and you might have to identify the root causes.
