v0.1.0

Released by @nfx on 18 Sep 19:16 · commit c6019ad

Version changelog

0.1.0

Features

  • Added interactive installation wizard (#184, #117).
  • Added job scheduling to the install.sh flow and created some documentation (#187).
  • Added debug notebook companion to troubleshoot the installation (#191).
  • Added support for inventorying Hive Metastore Table ACLs from all databases (#78, #122, #151).
  • Created $inventory.tables from a Scala notebook (#207); a table-crawling sketch follows this list.
  • Added local group migration support for ML-related objects (#56).
  • Added local group migration support for SQL warehouses (#57).
  • Added local group migration support for all compute-related resources (#53).
  • Added local group migration support for security-related objects (#58).
  • Added local group migration support for workflows (#54).
  • Added local group migration support for workspace-level objects (#59).
  • Added local group migration support for dashboards, queries, and alerts (#144).
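
To give the inventory bullets some shape, here is a minimal, hypothetical sketch of the crawling idea: walk every Hive Metastore database, list its tables, and persist the result as an inventory table. The real implementation is the Scala notebook from #207 plus the project's Python crawlers; the `spark` session and the target table name below are assumptions for illustration only.

```python
# A rough sketch of the $inventory.tables idea, NOT the project's actual code.
# Assumes a Databricks notebook where `spark` is already defined.
from pyspark.sql import Row

rows = []
for db in spark.catalog.listDatabases():
    for tbl in spark.catalog.listTables(db.name):
        rows.append(Row(database=db.name, name=tbl.name, object_type=tbl.tableType))

(
    spark.createDataFrame(rows)
    .write.mode("overwrite")
    .saveAsTable("hive_metastore.ucx.tables")  # hypothetical inventory location
)
```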

Stability

  • Added codecov.io publishing (#204).
  • Added more tests to group.py (#148).
  • Added tests for group state (#133).
  • Added tests for inventorizer and typed (#125).
  • Added tests for WorkspaceListing (#110).
  • Added make_*_permissions fixtures (#159).
  • Added reusable fixtures module (#119).
  • Added testing for permissions (#126).
  • Added inventory table manager tests (#153).
  • Added product_info to track the toolkit as an SDK integration (#76).
  • Added failsafe permission get operations (#65).
  • Always install the latest pip version in ./install.sh (#201).
  • Always store inventory in hive_metastore and make only inventory_database configurable (#178).
  • Changed the default logging level from TRACE to DEBUG (#124).
  • Consistently use WorkspaceClient from databricks.sdk (#120); a minimal usage sketch follows this list.
  • Converted pipeline code to use fixtures (#166).
  • Exclude mixins from coverage (#130).
  • Fixed codecov.io reporting (#212).
  • Fixed configuration path in job task install code (#210).
  • Fixed a bug with dependency definitions (#70).
  • Fixed failing test_jobs (#140).
  • Fixed the issues with experiment listing (#64).
  • Fixed integration testing configuration (#77).
  • Made the project runnable on nightly testing infrastructure (#75).
  • Migrated cluster policies to new fixtures (#174).
  • Migrated clusters to the new fixture framework (#162).
  • Migrated instance pool to the new fixture framework (#161).
  • Migrated to databricks.labs.ucx package (#90).
  • Migrated token authorization to new fixtures (#175).
  • Migrated the experiment fixture to a standard one (#168).
  • Migrated the jobs test to a fixture-based one (#167).
  • Migrated the model fixture to the standard fixtures (#169).
  • Migrated the warehouse fixture to a standard one (#170).
  • Organised modules by domain (#197).
  • Prefetch all account-level and workspace-level groups (#192).
  • Programmatically create a dashboard (#121).
  • Properly integrate Python logging facility (#118).
  • Refactored code to use Databricks SDK for Python (#27).
  • Refactored configuration and remove global provider state (#71).
  • Removed pydantic dependency (#138).
  • Removed redundant pyspark, databricks-connect, delta-spark, and pandas dependencies (#193).
  • Removed redundant typer[all] dependency and its usages (#194).
  • Renamed MigrationGroupsProvider to GroupMigrationState (#81).
  • Replaced ratelimit and tenacity dependencies with simpler implementations (#195).
  • Reorganised integration tests to align more with unit tests (#206).
  • Run the build workflow also on the main branch (#211).
  • Run integration tests with a single group (#152).
  • Simplified SqlBackend and table creation logic (#203).
  • Updated migration_config.yml (#179).
  • Updated legal information (#196).
  • Use the make_secret_scope fixture (#163).
  • Use a fixture factory for make_table, make_schema, and make_catalog (#189); a fixture-factory sketch follows this list.
  • Use new fixtures for notebooks and folders (#176).
  • Validate the toolkit notebook test (#183).
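
Since several entries above reference the fixture-factory pattern (make_table, make_schema, make_catalog, make_*_permissions), here is a generic pytest sketch of how such a factory works. It is not the project's actual fixtures module; the names and the omitted create/drop logic are placeholders.

```python
# Generic pytest "fixture factory" sketch: the fixture yields a callable that
# creates a uniquely named resource on demand and remembers it for cleanup.
import uuid

import pytest


@pytest.fixture
def make_schema():
    created = []

    def create(prefix: str = "ucx_test") -> str:
        name = f"{prefix}_{uuid.uuid4().hex[:8]}"
        # ... create the schema in the workspace here (omitted) ...
        created.append(name)
        return name

    yield create
    for name in created:
        # ... drop the schema here (omitted) ...
        pass


def test_uses_two_schemas(make_schema):
    first, second = make_schema(), make_schema()
    assert first != second
```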
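Similarly, for the databricks.sdk consolidation (#120) and the group prefetching work (#192), a minimal usage sketch, assuming authentication is already configured via the environment or a Databricks config profile:

```python
# Minimal sketch of relying on a single WorkspaceClient from databricks.sdk.
# Assumes credentials come from the environment or a Databricks config profile.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
print(w.current_user.me().user_name)   # the identity the toolkit runs as
for group in w.groups.list():          # enumerate workspace-level groups
    print(group.display_name)
```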

Contributing

  • Added a note on external dependencies (#139).
  • Added ability to run SQL queries on Spark when in Databricks Runtime (#108).
  • Added some ground rules for contributing (#82).
  • Added contributing instructions link from main readme (#109).
  • Added info about environment refreshes (#155).
  • Clarified documentation (#137).
  • Enabled merge queue (#146).
  • Improved CONTRIBUTING.md guide (#135, #145).

Kudos to @dependabot, @nsenno-dbr, @renardeinside, @nfx, @william-conti, @larsgeorge-db, @HariGS-DB, and @saraivdbx