### Cannot find value of partitioning column in file
Delta Lake data sets may have `NULL` values in the partitioning columns. These partitions are stored in the `HIVE_DEFAULT_PARTITION` folder, which is currently not supported in serverless SQL pool. In that case, you get an error that looks like this:

```
Resolving Delta logs on path 'https://....core.windows.net/.../' failed with error:
Cannot find value of partitioning column '<column name>' in file
```

**Workaround:** Update your Delta Lake data set using an Apache Spark pool and write some value (an empty string or `"null"`) instead of `null` in the partitioning column, as shown in the sketch below.
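
A minimal PySpark sketch of that rewrite, assuming a hypothetical partitioning column `region` and a placeholder `abfss` path; run it in a Synapse notebook where the `spark` session is predefined (Delta Lake allows overwriting a table that is being read in the same operation):

```python
from pyspark.sql import functions as F

# Placeholder path and hypothetical column name - adjust to your data set.
delta_path = "abfss://container@account.dfs.core.windows.net/delta/events"
partition_col = "region"

df = spark.read.format("delta").load(delta_path)

# Replace NULLs with a sentinel value so no HIVE_DEFAULT_PARTITION
# folder is produced on the next write.
fixed = df.withColumn(partition_col,
                      F.coalesce(F.col(partition_col), F.lit("null")))

(fixed.write
      .format("delta")
      .mode("overwrite")
      .partitionBy(partition_col)
      .save(delta_path))
```
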
**Release**: November 2021
### JSON text is not properly formatted

The error message ends like this:

```
Msg 16513, Level 16, State 0, Line 1
Error reading external metadata.
```

First, make sure that your Delta Lake data set is not corrupted.
- Verify that you can read the content of the Delta Lake folder using an Apache Spark pool in Azure Synapse. This confirms that the `_delta_log` file is not corrupted (see the sketch after this list).
- Verify that you can read the content of the data files by specifying `FORMAT='PARQUET'` and using the recursive wildcard `/**` at the end of the URI path. If you can read all the Parquet files, the issue is in the `_delta_log` transaction log folder.
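
A sketch of both checks from a Synapse notebook, with a placeholder path. The first statement exercises `_delta_log`; the second reads the data files directly as Parquet, and Spark skips the `_delta_log` folder because paths starting with `_` are treated as hidden:

```python
delta_path = "abfss://container@account.dfs.core.windows.net/delta/events"  # placeholder

# Check 1: resolve the table through the transaction log.
# Success means _delta_log itself is readable.
spark.read.format("delta").load(delta_path).limit(10).show()

# Check 2: read the data files as plain Parquet, bypassing the log.
# Success here but failure above points at the _delta_log folder.
spark.read.option("recursiveFileLookup", "true").parquet(delta_path).limit(10).show()
```
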
**Workaround:** Try to create a checkpoint on the Delta Lake data set using an Apache Spark pool and rerun the query. The checkpoint aggregates the transactional JSON log files and might solve the issue.
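
There is no public PySpark API for forcing a checkpoint, so one hedged option is Delta Lake's internal `DeltaLog` interface through the JVM gateway. Treat this as a sketch only: the internal signature varies across Delta versions, and the path is a placeholder:

```python
delta_path = "abfss://container@account.dfs.core.windows.net/delta/events"  # placeholder

# DeltaLog is an *internal* Delta Lake API; checkpoint() writes a Parquet
# checkpoint file that aggregates the JSON commits in _delta_log.
delta_log = spark._jvm.org.apache.spark.sql.delta.DeltaLog.forTable(
    spark._jsparkSession, delta_path
)
delta_log.checkpoint()
```
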
If the problem persists, open a support ticket; the Azure team will investigate the content of the `delta_log` file and provide more information.
### Resolving delta log on path ... failed with error: Cannot parse JSON object from log file

**Status**: Resolved

This error might happen because of the following unsupported features:

- [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters) on the Delta Lake data set. Serverless SQL pools in Azure Synapse Analytics do not support data sets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters).
- Float column in the Delta Lake data set with statistics.
- Data set partitioned on a float column.
**Workaround:** [Remove the BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters#drop-a-bloom-filter-index) if you want to read the Delta Lake folder using the serverless SQL pool. If `float` columns are causing the issue, re-partition the data set or remove the statistics. Both steps are sketched below.
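
A sketch of that cleanup. The `DROP BLOOMFILTER INDEX` statement is Azure Databricks syntax from the linked article, and the table, column names, and path are hypothetical placeholders:

```python
# On Azure Databricks: drop the Bloom filter index (syntax from the linked doc).
spark.sql("DROP BLOOMFILTER INDEX ON TABLE events FOR COLUMNS(device_id)")

# If a float partitioning column is the problem, rewrite the data set
# partitioned on a different (hypothetical) column instead. Changing the
# partitioning of an existing Delta table requires overwriteSchema=true.
delta_path = "abfss://container@account.dfs.core.windows.net/delta/events"  # placeholder
df = spark.read.format("delta").load(delta_path)
(df.write
   .format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")
   .partitionBy("event_date")  # hypothetical non-float column
   .save(delta_path))
```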