## v0.33.0
- Added `validate-table-locations` command for checking overlapping tables across workspaces (#2341). A new command, `validate-table-locations`, has been added to check for overlapping table locations across workspaces before migrating tables, ensuring that tables can be migrated across workspaces without conflicts. The command is part of the table migration workflows and uses a `LocationTrie` data structure to efficiently search for overlapping table locations; if any overlaps are found, the command logs a warning message and adds the conflicting tables to a list of all conflicts, which is returned at the end of the command. The command includes a `workspace-ids` flag, which allows users to specify a list of workspace IDs to include in the validation; if this flag is not provided, the command includes all workspaces present in the account. This new command resolves issue #673. The `validate_table_locations` method is added to the `AccountAggregate` class, and the `ExternalLocations` class has been updated to use the new `LocationTrie` class. The import section has also been updated to include `LocationTrie` and `Table` from `databricks.labs.ucx.hive_metastore.locations` and `databricks.labs.ucx.hive_metastore.tables` respectively. Additionally, test cases have been added to ensure the correct functioning of the `LocationTrie` class. A minimal sketch of the overlap-detection idea appears after this change list.
- Added references to hive_metastore catalog in all table references an… (#2419). In this release, we have updated various methods and functions across multiple files to include explicit references to the `hive_metastore` catalog in table references. This change aims to improve the accuracy and consistency of table references in the codebase, enhancing reliability and maintainability. Affected files include `azure.py`, `init_scripts.py`, `pipelines.py`, and others in the `databricks/labs/ucx/assessment` module, as well as test files in the `tests/unit/assessment` and `tests/unit/azure` directories. The `_try_fetch` method has been updated to include the catalog name in table references in all instances, ensuring the correct catalog is referenced in all queries. Additionally, various test functions in affected files have been updated to reference the `hive_metastore` catalog in SQL queries. This update is part of the resolution of issue #2207 and promotes robust handling of catalog, schema, and table naming scenarios in hive metastore migration status management.
- Added support for skipping views when migrating tables and views (#2343). In this release, we've added support for skipping both tables and views during the migration process in the `databricks labs ucx` command, addressing issue #1937. The `skip` command has been enhanced to support skipping views, and new functions `skip_table_or_view` and `load_one` have been introduced to the `Table` class. Appropriate error handling and tests, including unit tests and integration tests, have been implemented to ensure the functionality works as expected. With these changes, users can now skip views during migration and have more flexibility when working with tables in the Unity Catalog.
- Avoid false positives when linting for pyspark patterns (#2381). This release includes enhancements to the PySpark linter aimed at reducing false positives during linting. The linter has been updated to check the originating module when detecting PySpark calls, ensuring that warnings are triggered only for relevant nodes from the `pyspark` or `dbutils` modules. Specifically, the `ReturnValueMatcher` and `DirectFilesystemAccessMatcher` classes have been modified to include this new check. These changes improve the overall accuracy of the PySpark linter, ensuring that only pertinent warnings are surfaced during linting. Additionally, the commit includes updated unit tests to verify the correct behavior of the modified linter. Specific improvements have been made to avoid false positives when detecting the `listTables` function in the PySpark catalog, ensuring that the warning is only triggered for the actual PySpark `listTables` method call.
- Bug: Generate custom warning when doing table size check and encountering DELTA_INVALID_FORMAT exception (#2426). A modification has been implemented in the `_safe_get_table_size` method within the `table_size.py` file of the `hive_metastore` package. This change addresses an issue (#1913) concerning the occurrence of a `DELTA_INVALID_FORMAT` exception while determining the size of a Delta table. Instead of raising an error, the exception is now converted into a warning, and the function proceeds to process the rest of the table. A corresponding warning message has been added to inform users about the issue and suggest checking the table structure. No new methods have been introduced, and existing functionality has been updated to handle this specific exception more gracefully. The changes have been thoroughly tested with unit tests for the table size check when encountering a `DELTA_INVALID_FORMAT` error, employing a mock backend and a mock Spark session to simulate the error conversion. This change does not affect user documentation, CLI commands, workflows, or tables, and is solely intended for software engineers adopting the project. A minimal sketch of the error-to-warning conversion appears after this change list.
- Clean up left over uber principal resources for Azure (#2370). This commit includes modifications to the Azure access module of the UCX project to clean up resources if the creation of the uber principal fails midway. It addresses issues #2360 (Azure part) and #2363, and modifies the command `databricks labs ucx create-uber-principal` to include this functionality. The changes include adding new methods and modifying existing ones for working with Azure resources, such as `StorageAccount`, `AccessConnector`, and `AzureRoleAssignment`. Additionally, new unit and integration tests have been added and manually tested to ensure that the changes work as intended. The commit also includes new fixtures for testing storage accounts and access connectors, and a test case for getting, applying, and deleting storage permissions. The `azure_api_client` function has been updated to handle different input argument lengths and methods such as "get", "put", and "post". A new managed identity, "appIduser1", has been added to the Azure mappings file, and the corresponding role assignments have been updated. The changes include error handling mechanisms for certain scenarios that may arise during the creation of the uber service principal.
- Crawlers: Use `TRUNCATE TABLE` instead of `DELETE FROM` when resetting crawler tables (#2392). In this release, the `.reset()` method for crawlers has been updated to use `TRUNCATE TABLE` instead of `DELETE FROM` when clearing out crawler tables, resulting in more efficient and idiomatic code. This change affects the existing `migrate-data-reconciliation` workflow and is accompanied by updated unit and integration tests to ensure correct functionality. The `reset()` method now accepts a table name argument, which is passed to the newly introduced `escape_sql_identifier()` utility function from the `databricks.labs.ucx.framework.utils` module for added safety. The migration status is now refreshed using the `TRUNCATE TABLE` command, which removes all records from the table, providing improved performance compared to the previous implementation. The `SHOW DATABASES` and `TRUNCATE TABLE` queries are validated in the `refresh_migration_status` workflow test, which now checks that the `TRUNCATE TABLE` query is used instead of `DELETE FROM` when resetting crawler tables. A minimal sketch of this reset pattern appears after this change list.
- Detect tables that are not present in the mapping file (#2205). In this release, we have introduced a new method `get_remaining_tables()` that returns a list of tables in the Hive metastore that have not been processed by the migration tool. This method performs a full refresh of the index and checks each table in the Hive metastore against the index to determine if it has been migrated. We have also added a new private method `_is_migrated()` to check if a given table has already been migrated. Additionally, we have replaced the `refresh_migration_status` method with `update_migration_status` in several workflows to present a more accurate representation of the migration process in the dashboard. A new SQL script, `04_1_remaining_hms_tables.sql`, has been added to list the remaining tables in the Hive metastore which are not present in the mapping file. We have also added a new test for the table migration job that verifies that tables not present in the mapping file are detected and reported. A new test function `test_refresh_migration_status_published_remained_tables` has been added to ensure that the migration process correctly handles the case where tables have been published to the target metadata store but still remain in the source metadata store. These changes are intended to improve the functionality of the migration tool for Hive metastore tables and resolve issue #1221.
- Fixed ConcurrentDeleteReadException in migrate-view task during table migration (#2282). In this release, we have implemented a fix for the `ConcurrentDeleteReadException` that occurred during the `migrate-view` command's migration task. The solution involved moving the refresh of the migration status from within batches to between batches. Along with the fix, we added a new method `index()` to the `TableMigrate` class, which checks whether a table has been migrated. This method is used in the `_view_can_be_migrated` method to ensure that all dependencies of a view have been migrated before migrating the view. The `index_full_refresh()` method, which earlier performed this check, has been modified to refresh the index between batches instead of within batches. It is worth noting that the changes have been manually tested, but no unit tests, integration tests, or verification on staging environments have been added. The target audience for this release is software engineers who adopt this project. No new documentation, commands, workflows, or tables have been added or modified in this release.
- Fixed documentation typos: `create-missing-pricipals` -> `create-missing-principals` (#2357). This pull request resolves typographical errors in the `create-missing-principals` command documentation, correcting the mistaken usage of `create-missing-pricipals` throughout the project documentation. The changes encompass the command description, the UCX command section, and the manual process documentation for AWS storage credentials. The `create-missing-principals` command, utilized by Cloud Admins to create and configure new AWS roles for Unity Catalog access, remains functionally unaltered.
- Fixed linting for Spark Python workflow tasks (#2349). This commit updates the linter to support PySpark tasks in workflows by modifying the existing `experimental-workflow-linter` to correctly handle these tasks. Previously, the linter assumed Python files were Jupyter notebooks, but PySpark tasks are top-level Python files run as `__main__`. This change introduces a new `ImportFileResolver` class to resolve imports for PySpark tasks, and updates to the `DependencyResolver` class to properly handle them. Additionally, unit and integration tests have been updated and added to ensure the correct behavior of the linter. The `DependencyResolver` constructor now accepts an additional `import_resolver` argument in some instances. This commit resolves issue #2213 and improves the accuracy and versatility of the linter for different types of Python files.
- Fixed missing `security_policy` when updating SQL warehouse config (#2409). In this release, we have added new methods `GetWorkspaceWarehouseConfigResponseSecurityPolicy` and `SetWorkspaceWarehouseConfigRequestSecurityPolicy` to improve handling of the SQL warehouse config security policy. We have introduced a new variable `security_policy` to store the security policy value, which is used when updating the SQL warehouse configuration, ensuring that the required security policy is set and fixing the `InvalidParameterValue: Endpoint security policy is required and must be one of NONE, DATA_ACCESS_CONTROL, PASSTHROUGH` error. Additionally, when the `enable_serverless_compute` error occurs, the new SQL warehouse data access config is printed in the log, allowing users to manually configure the uber principal in the UI. We have also updated the `create_uber_principal` method to set the security policy correctly and added parameterized tests to test the setting of the warehouse configuration security policy. The `test_create_global_spn` method has been updated to include the `security_policy` parameter in the `create_global_spn` method call, and new test cases have been added to verify that the warehouse config's security policy is correctly updated. These enhancements help make the system more robust and user-friendly.
- Fixed raise logs `ResourceDoesNotExists` when iterating the log paths (#2382). In this commit, we have improved the handling of the `ResourceDoesNotExist` exception when iterating through log paths in the open-source library. Previously, the exception was not being properly raised or handled, resulting in unreliable code behavior. To address this, we have added unit tests in the `test_install.py` file that accurately reflect the actual behavior when iterating log paths. We have also modified the test to raise the `ResourceDoesNotExist` exception when the result is iterated over, rather than when the method is called. Additionally, we have introduced the `ResourceDoesNotExistIter` class to make it easier to simulate the error during testing. These changes ensure that the code can gracefully handle cases where the specified log path does not exist, improving the overall reliability and robustness of the library. Co-authored by Andrew Snare.
- Generate custom error during installation due to external metastore connectivity issues (#2425). In this release, we have added a new custom error `OperationFailed` to the `InstallUcxError` enumeration in the `databricks/labs/ucx/install.py` file. This change is accompanied by an exception handler that checks if the error message contains a specific string related to AWS credentials, indicating an issue with external metastore connectivity. If this condition is met, a new `OperationFailed` error is raised with a custom message, providing instructions for resolving the external metastore connectivity issue and re-running the UCX installation. This enhancement aims to provide a more user-friendly error message for users encountering issues during UCX installation due to external metastore connectivity. The functionality of the `_create_database` method has been modified to include this new error handling mechanism, with no alterations made to the existing functionality. Although the changes have not been tested using unit tests, integration tests, or staging environments, they have been manually tested on a workspace with incorrect external metastore connectivity.
- Improve logging when waiting for workflows to complete (#2364). This pull request enhances the logging and error handling of the databricks labs ucx project, specifically when executing workflows. It corrects a bug where `skip_job_wait` was not handled correctly, and now allows for a specified timeout period instead of the default 20 minutes. Job logs are replicated into the local logfile if a workflow times out. The existing `databricks labs ucx ensure-assessment-run` and `databricks labs ucx migrate-tables` commands have been updated, and unit and integration tests have been added. These improvements provide more detailed and informative logs, and ensure the quality of the code through added tests. The behavior of the `wait_get_run_job_terminated_or_skipped` method has been modified in the tests, and the `assign_metastore` function has also been updated, but the specific changes are not specified in the commit message.
- Lint dependencies consistently (#2400). In this release, the `files.py` module in the `databricks/labs/ucx/source_code/linters` package has been updated to ensure consistent linting of jobs. Previously, the linting sequence was not consistent and did not provide inherited context when linting jobs. These issues have been resolved by modifying the `_lint_one` function to build an inherited tree when linting files or notebooks, and by adding a `FileLinter` object to determine which file/notebook linter to use for linting. The `_lint_task` function has also been updated to yield a `LocatedAdvice` object instead of a tuple, and takes an additional argument, `linted_paths`, which is a set of paths that have been previously linted. Additionally, the `_lint_notebook` and `_lint_file` functions have been removed, as their functionality is now encompassed by the `_lint_one` function. These changes ensure consistent linting of jobs and inherited context. Unit tests were run and passed. Co-authored by Eric Vergnaud.
- Make Lakeview names unique (#2354). A change has been implemented to guarantee the uniqueness of dataset names in Lakeview dashboards, addressing issue #2345 where non-unique names caused system errors. This change includes renaming the `count` dataset to `fourtytwo` in the `datasets` field and updating the `name` field for a widget query from `count` to `counter`. These internal adjustments streamline the Lakeview system's functionality while ensuring consistent and unique dataset naming.
- Optimisation: when detecting if a file is a notebook only read the start instead of the whole file (#2390). In this release, we have optimized the detection of notebook files during linting in a non-Workspace path. This has been achieved by modifying the `is_a_notebook` function in the `base.py` file to improve efficiency. The function now checks the start of the file instead of loading the entire file, which is more resource-intensive. If the content of the file is not available, the function attempts to read the file header when opening the file, instead of returning False if there's an error. The `magic_header` is used to determine whether the file is a notebook: if the content is available, the function checks whether it starts with the `magic_header`, without having to read the entire file, which can be time-consuming and resource-intensive. This change improves linting performance and reduces resource usage, making the library more efficient. A minimal sketch of the header check appears after this change list.
- Retry dashboard install on DeadlineExceeded (#2379). In this release, we have added the `DeadlineExceeded` exception to the `@retried` decorator in the `_create_dashboard` method, which is used to create a Lakeview dashboard from SQL queries in a folder. This modification is intended to improve the reliability of dashboard installation by retrying the operation for up to four minutes in case of a `DeadlineExceeded` error. This change resolves issues #2376, #2377, and #2389, which were related to dashboard installation timeouts. Software engineers will benefit from this update as it ensures successful installation of dashboards, even in scenarios where timeouts were previously encountered. A minimal sketch of the retry pattern appears after this change list.
- Updated databricks-labs-lsql requirement from <0.8,>=0.5 to >=0.5,<0.9 (#2416). In this update, we have modified the version requirement for the `databricks-labs-lsql` library to allow version 0.8, which includes bug fixes and improvements in dashboard creation and deployment. We have changed the version constraint from `<0.8,>=0.5` to `>=0.5,<0.9` to accommodate the latest version while preventing future major version upgrades. This change enhances the overall design of the system and simplifies the code for managing dashboard deployment. Additionally, we have introduced a new test that verifies the `deploy_dashboard` method is no longer being used, utilizing the `deprecated_call` function from pytest to ensure that calling the method raises a deprecation warning. This ensures that the system remains maintainable and up-to-date with the latest version of the `databricks-labs-lsql` library.
- Updated ext hms detection to include more conf attributes (#2414). In this enhancement, the code for installing UCX has been updated to include additional configuration attributes for detecting an external Hive Metastore (HMS). The previous implementation failed to recognize specific attributes like `spark.hadoop.hive.metastore.uris`. This update introduces a new list of recognized prefixes for external HMS Spark attributes, namely `spark_conf.spark.sql.hive.metastore`, `spark_conf.spark.hadoop.hive.metastore`, `spark_conf.spark.hadoop.javax.jdo.option`, and `spark_conf.spark.databricks.hive.metastore`. This change enables the extraction of a broader set of configuration attributes during installation, thereby improving the overall detection and configuration of external HMS attributes. A minimal sketch of prefix-based detection appears after this change list.
- Updated sqlglot requirement from <25.11,>=25.5.0 to >=25.5.0,<25.12 (#2415). In this pull request, we have updated the `sqlglot` dependency to a new version range, `>=25.5.0,<25.12`, to allow the latest version of `sqlglot` to be used while ensuring that the version does not exceed 25.12. New features in this range include support for ALTER VIEW AS SELECT, UNLOAD, and other SQL commands, as well as performance improvements; bug fixes include resolutions for issues related to OUTER/CROSS APPLY parsing, GENERATE_TIMESTAMP_ARRAY, and other features. Additionally, there are changes that improve the handling of various SQL dialects, including BigQuery, DuckDB, Snowflake, Oracle, and ClickHouse. The changelog and commits also reveal breaking changes, bug fixes, and improvements in the library since the last version used in the project. By updating the `sqlglot` dependency to the new version range, the project can leverage these new features, bug fixes, and performance improvements.
- Updated sqlglot requirement from <25.9,>=25.5.0 to >=25.5.0,<25.11 (#2403). In this release, we have updated the required version range for the `sqlglot` library from `[25.5.0, 25.9)` to `[25.5.0, 25.11)`. This update allows us to use newer versions of `sqlglot` while maintaining compatibility with previous versions. The `sqlglot` team's latest release, version 25.10.0, includes several breaking changes, new features, bug fixes, and refactors. Notable changes include support for STREAMING tables in Databricks, transpilation of Snowflake's CONVERT_TIMEZONE function for DuckDB, support for GENERATE_TIMESTAMP_ARRAY in BigQuery, and the ability to parse RENAME TABLE as a Command in Teradata. Due to these updates, we strongly advise conducting thorough testing before deploying to production, as these changes may impact the functionality of the project.
- Use `load_table` instead of deprecated `is_view` in failing integration test `test_mapping_skips_tables_databases` (#2412). In this release, the `is_view` parameter of the `skip_table_or_view` method in the `test_mapping_skips_tables_databases` integration test has been deprecated and replaced with a more flexible `load_table` parameter. This change allows for greater control and customization when specifying how a table should be loaded during the test. The `load_table` parameter is a callable that returns a `Table` object, which contains information about the table's schema, name, object type, and table format. This improvement removes the use of a deprecated parameter, enhancing the maintainability of the test code.
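
For the `validate-table-locations` entry (#2341) above, the following is a minimal, illustrative sketch of how a prefix trie over storage-path segments can detect overlapping table locations. It is not the UCX `LocationTrie` implementation; the class, methods, and example paths here are hypothetical.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class _TrieNode:
    children: dict[str, "_TrieNode"] = field(default_factory=dict)
    tables: list[str] = field(default_factory=list)  # tables registered at exactly this path


class LocationTrie:
    """Illustrative prefix trie over storage-path segments (not the UCX class)."""

    def __init__(self) -> None:
        self._root = _TrieNode()

    @staticmethod
    def _segments(location: str) -> list[str]:
        # 's3://bucket/a/b/' -> ['s3:', 'bucket', 'a', 'b']
        return [s for s in location.rstrip("/").split("/") if s]

    def insert(self, table_name: str, location: str) -> None:
        node = self._root
        for segment in self._segments(location):
            node = node.children.setdefault(segment, _TrieNode())
        node.tables.append(table_name)

    def find_overlaps(self, location: str) -> list[str]:
        """Return tables whose location is a prefix of `location`, or nested under it."""
        overlaps: list[str] = []
        node = self._root
        for segment in self._segments(location):
            if node.tables:  # an existing table location is a prefix of this one
                overlaps.extend(node.tables)
            child = node.children.get(segment)
            if child is None:
                return overlaps
            node = child
        # everything registered at or below this node overlaps as well
        stack = [node]
        while stack:
            current = stack.pop()
            overlaps.extend(current.tables)
            stack.extend(current.children.values())
        return overlaps


if __name__ == "__main__":
    trie = LocationTrie()
    trie.insert("hive_metastore.sales.orders", "s3://bucket/warehouse/orders")
    # A child path overlaps with the registered table location:
    print(trie.find_overlaps("s3://bucket/warehouse/orders/2024"))
```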
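
For the table-size entry (#2426), this is a hedged sketch of converting a `DELTA_INVALID_FORMAT` failure into a warning instead of aborting the size check. The function name and the `get_size_in_bytes` callable are stand-ins, not the actual UCX code.

```python
from __future__ import annotations

import logging

logger = logging.getLogger(__name__)


def safe_get_table_size(get_size_in_bytes, full_table_name: str) -> int | None:
    """Return the table size in bytes, or None when the Delta format is invalid.

    `get_size_in_bytes` is a hypothetical callable standing in for the Spark call
    UCX uses; the point of the sketch is only the error-to-warning conversion.
    """
    try:
        return get_size_in_bytes(full_table_name)
    except Exception as e:
        if "DELTA_INVALID_FORMAT" in str(e):
            logger.warning(
                f"Unable to determine size of table {full_table_name}: invalid Delta format. "
                "Check the table structure and location."
            )
            return None
        raise
```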
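
For the crawler-reset entry (#2392), a minimal sketch of issuing `TRUNCATE TABLE` through an identifier-escaping helper is shown below. The crawler class, the backend, and the local `escape_sql_identifier` are illustrative stand-ins (UCX ships its own helper in `databricks.labs.ucx.framework.utils`).

```python
def escape_sql_identifier(path: str) -> str:
    """Illustrative stand-in: backtick-quote each part of a dotted table name."""
    return ".".join(f"`{part.strip('`')}`" for part in path.split("."))


class ExampleCrawler:
    """Hypothetical crawler showing the reset pattern described in #2392."""

    def __init__(self, sql_backend, catalog: str, schema: str, table: str) -> None:
        self._sql_backend = sql_backend  # anything exposing execute(sql: str)
        self._full_name = f"{catalog}.{schema}.{table}"

    def reset(self) -> None:
        # TRUNCATE TABLE removes all rows in one statement, which is cheaper and more
        # idiomatic than DELETE FROM for a full reset of a crawler snapshot table.
        self._sql_backend.execute(f"TRUNCATE TABLE {escape_sql_identifier(self._full_name)}")


class _PrintBackend:
    def execute(self, sql: str) -> None:
        print(sql)


ExampleCrawler(_PrintBackend(), "hive_metastore", "ucx", "tables").reset()
# -> TRUNCATE TABLE `hive_metastore`.`ucx`.`tables`
```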
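
For the notebook-detection entry (#2390), this is a small sketch of checking only a file's first bytes against a magic header, assuming the `# Databricks notebook source` header that Databricks writes to exported notebook sources. The function below is illustrative, not the UCX `is_a_notebook` implementation.

```python
from __future__ import annotations

from pathlib import Path

MAGIC_HEADER = "# Databricks notebook source"  # assumed notebook-source header


def looks_like_notebook(path: Path, content: str | None = None) -> bool:
    """Decide whether a file is a notebook by inspecting only its first bytes."""
    header_bytes = MAGIC_HEADER.encode("utf-8")
    if content is not None:
        return content.startswith(MAGIC_HEADER)
    try:
        with path.open("rb") as f:
            return f.read(len(header_bytes)) == header_bytes
    except OSError:
        return False
```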
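
For the dashboard-install entry (#2379), a hedged sketch of retrying on `DeadlineExceeded` with a roughly four-minute budget follows. It assumes the Databricks SDK for Python exposes `retried` in `databricks.sdk.retries` and `DeadlineExceeded` in `databricks.sdk.errors`; the decorated function and its `dashboards` client are hypothetical.

```python
from datetime import timedelta

from databricks.sdk.errors import DeadlineExceeded
from databricks.sdk.retries import retried


# Hypothetical install step: keep retrying for up to 4 minutes when the platform
# reports DeadlineExceeded while a Lakeview dashboard is being created.
@retried(on=[DeadlineExceeded], timeout=timedelta(minutes=4))
def create_dashboard(dashboards, folder: str, display_name: str):
    # `dashboards` stands in for whatever client actually builds the dashboard
    # from the SQL queries in `folder`; only the retry wrapper is the point here.
    return dashboards.create(folder=folder, display_name=display_name)
```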
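
For the external-HMS detection entry (#2414), a minimal sketch of prefix-based matching over a cluster policy / Spark conf mapping is shown below. Only the four prefixes come from the entry; the function and sample values are illustrative.

```python
# Prefixes named in #2414 for recognizing external Hive Metastore settings.
EXT_HMS_PREFIXES = (
    "spark_conf.spark.sql.hive.metastore",
    "spark_conf.spark.hadoop.hive.metastore",
    "spark_conf.spark.hadoop.javax.jdo.option",
    "spark_conf.spark.databricks.hive.metastore",
)


def external_hms_conf(policy_definition: dict[str, str]) -> dict[str, str]:
    """Return only the attributes that look like external Hive Metastore configuration."""
    return {key: value for key, value in policy_definition.items() if key.startswith(EXT_HMS_PREFIXES)}


conf = {
    "spark_conf.spark.hadoop.hive.metastore.uris": "thrift://hms.example.com:9083",  # sample value
    "spark_conf.spark.master": "local[*]",
}
print(external_hms_conf(conf))  # keeps only the metastore URI entry
```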
Dependency updates:
- Updated sqlglot requirement from <25.9,>=25.5.0 to >=25.5.0,<25.11 (#2403).
- Updated databricks-labs-lsql requirement from <0.8,>=0.5 to >=0.5,<0.9 (#2416).
- Updated sqlglot requirement from <25.11,>=25.5.0 to >=25.5.0,<25.12 (#2415).
Contributors: @asnare, @JCZuurmond, @ericvergnaud, @pritishpai, @dependabot[bot], @FastLee, @HariGS-DB, @qziyuan, @aminmovahed-db