v0.1.1
- Added batched iteration for `INSERT INTO` queries in `StatementExecutionBackend` with default `max_records_per_batch=1000` (#237); see the sketch after this list.
- Added crawler for mount points (#209).
- Added crawlers for compatibility of jobs and clusters, along with basic recommendations for external locations (#244).
- Added safe return on grants (#246).
- Added ability to specify an empty group filter in the installer script (#216) (#217).
- Added ability for multiple different users to install the application on the same workspace (#235).
- Added dashboard creation on installation and a requirement for `warehouse_id` in config, so that the assessment dashboards are refreshed automatically after job runs (#214); a hypothetical config sketch follows the list.
- Added reliance on rate limiting from the Databricks SDK for listing workspace (#258).
- Fixed errors in corner cases where Azure Service Principal credentials were not available in the Spark context (#254).
- Fixed `DESCRIBE TABLE` throwing errors when listing Legacy Table ACLs (#238).
- Fixed `file already exists` error in the installer script (#219) (#222).
- Fixed `guess_external_locations` failure with `AttributeError: as_dict` and added an integration test (#259).
- Fixed error handling edge cases in the `crawl_tables` task (#243) (#251).
- Fixed `crawl_permissions` task failure on folder names containing a forward slash (#234).
- Improved `README` notebook documentation (#260, #228, #252, #223, #225).
- Removed redundant `.python-version` file (#221).
- Removed discovery of account groups from the `crawl_permissions` task (#240).
- Updated databricks-sdk requirement from ~=0.8.0 to ~=0.9.0 (#245).
Kudos to @larsgeorge-db @william-conti @dmoore247 @tamilselvanveeramani @nfx @FastLee