## v0.39.0
- Added `Farama-Notifications` to known list (#2822). A new entry integrates `Farama-Notifications` into the existing known list, partially addressing issue #193.
- Added `aiohttp-cors` library to known list (#2775). The `aiohttp-cors` library provides asynchronous Cross-Origin Resource Sharing (CORS) handling for the `aiohttp` library. The known list now covers its modules, including `aiohttp_cors`, `aiohttp_cors.abc`, `aiohttp_cors.cors_config`, `aiohttp_cors.mixin`, `aiohttp_cors.preflight_handler`, `aiohttp_cors.resource_options`, and `aiohttp_cors.urldispatcher_router_adapter`, which configure and handle CORS in `aiohttp` applications. This partially resolves issue #1931 and strengthens the application's cross-origin resource sharing capabilities.
- Added `category-encoders` library to known list (#2781). The `category-encoders` library provides a variety of methods for encoding categorical variables as numerical data, including one-hot encoding and target encoding. A new entry in the `known.json` file lists the library's modules and classes for the various encoding methods. This resolves part of issue #1931.
- Added `cmdstanpy` to known list (#2786). Both `cmdstanpy` and `stanio` have been added to the known list. `cmdstanpy` is a Python interface to the Stan probabilistic programming language: it supports loading, inspecting, and manipulating Stan model objects, as well as running MCMC simulations. `stanio` provides functionality for reading and writing Stan data and model files. These additions expand the options for working with probabilistic models written in Stan.
- Added `confection` library to known list (#2787). The `confection` library, a lightweight, pure-Python configuration system, has been added to the known list and is now usable within the project. Several modules from the `srsly` library, a collection of serialization utilities for Python with support for JSON, MessagePack, cloudpickle, and Ruamel YAML, have also been added, increasing flexibility in handling serialized data. This partially resolves issue #1931.
- Added `configparser` library to known list (#2796). `configparser` is the standard Python library for parsing configuration files. Besides whitelisting the library itself, this change recognizes the `backports.configparser` and `backports.configparser.compat` modules, which provide backward compatibility on older Python versions, so the library can be used seamlessly regardless of Python version. This addresses part of issue #1931.
- Added `diskcache` library to known list (#2790). The known list now includes `diskcache`, `diskcache.cli`, `diskcache.core`, `diskcache.djangocache`, `diskcache.persistent`, and `diskcache.recipes`. `diskcache` is a high-performance, disk-backed caching library, useful for caching database queries, API responses, or any large data that needs frequent access. This partially addresses issue #1931.
- Added `dm-tree` library to known list (#2789). `dm-tree` provides utilities, backed by a C++ core, for creating and manipulating tree-structured (nested) data, with support for sequences and tree benchmarking. This addition expands the range of available data structures and partially resolves issue #1931.
- Added `evaluate` to known list (#2821). The `evaluate` package is a tool for evaluating and analyzing machine learning models, providing a consistent interface to various evaluation tasks. Its dependencies have been added as well: `colorful` (colorized terminal output), `cmdstanpy` (Python infrastructure for Stan, a platform for statistical modeling and high-performance statistical computation), `comm` (creating and managing IPython comms), `eradicate` (removing commented-out code from Python files), `multiprocess` (spawning processes), and `xxhash` (the fast XXHash hashing algorithms). This partly resolves issue #1931.
- Added `future` to known list (#2823). The `future` module is a compatibility layer that smooths over the differences between Python 2 and Python 3, providing backward-compatible tools and fixers. The known list now covers its sub-modules, such as `future.backports`, `future.builtins`, `future.moves`, and `future.standard_library`, as well as the related `libfuturize`, `libpasteurize`, and `past` modules, which automate converting Python 2 code to Python 3 syntax. These additions make it easier to write code that runs on both major versions of the language.
- Added `google-api-core` to known list (#2824). The `google-api-core` package brings in modules for low-level support of Google Cloud services, such as client options, gRPC helpers, and retry mechanisms. The `proto-plus` package has also been added; it simplifies handling and manipulating protobuf messages, with datetime helpers, enums, fields, marshaling utilities, and message definitions. Together these provide a more feature-rich environment for interacting with Google Cloud services.
- Added `google-auth-oauthlib` and dependent libraries to known list (#2825). `google-auth-oauthlib` implements Google's OAuth2 client authentication and authorization flows, while `requests-oauthlib` provides OAuth1 and OAuth2 support for the `requests` library. This partially resolves the missing-dependencies issue and improves handling of OAuth2 authentication flows with Google and other providers.
- Added `greenlet` to known list (#2830). The `greenlet` library has been added to the known list in the configuration file, addressing part of issue #193.
- Added `gymnasium` to known list (#2832). The popular open-source `gymnasium` library provides environments, spaces, and wrappers for developing and testing reinforcement learning algorithms. The known list covers its modules, including `gymnasium.core`, `gymnasium.envs`, `gymnasium.envs.box2d`, `gymnasium.envs.classic_control`, `gymnasium.envs.mujoco`, `gymnasium.envs.phys2d`, `gymnasium.envs.registration`, `gymnasium.envs.tabular`, `gymnasium.envs.toy_text`, `gymnasium.experimental`, `gymnasium.logger`, `gymnasium.spaces`, and `gymnasium.utils`. Developers can now use `gymnasium` without modifying existing code. This partly addresses issue #1931.
- Added and populated UCX `workflow_runs` table (#2754). A new `workflow_runs` table tracks the status of workflow runs and handles concurrent writes, resolving issue #2600. The change includes modifications to the `migration-process-experimental` workflow, new `WorkflowRunRecorder` and `ProgressTrackingInstallation` classes, a `record_workflow_run` method on the `MigrationWorkflow` class, updated user documentation, and unit and integration tests verifying that workflow run information is recorded correctly. One open question remains: whether to derive workflow run status from `parse_log_task`.
- Added collection of used tables from Python notebooks and files and SQL queries (#2772). Linting jobs now collect and store table usage information, enabling tracking of legacy table usage and lineage. The changes modify existing workflows, add new tables and views, and introduce new classes such as `UsedTablesCrawler`, `LineageAtom`, and `TableInfoNode`, which track table usage and lineage in Python notebooks, files, and SQL queries. Unit and integration tests have been added and updated. This is the first pull request in a series of three; the next two will use the table information in queries and display results in the assessment dashboard.
- Changed logic of direct filesystem access linting (#2766). The direct filesystem access (DFSA) linting logic now produces fewer false positives. Previously, every string constant matching a DFSA pattern was detected, with false positives filtered case by case. Detection is now narrowed to instances originating from the `spark` or `dbutils` modules. New methods such as `is_builtin()` and `get_call_name()` determine whether a given node is a built-in, and the test cases in `test_directfs.py` have been updated to reflect the new detection criteria.
- Fixed integration issue when collecting tables (#2817). A new `UsedTablesCrawler` class crawls tables in paths and queries, resolving the issues reported in #2800 and #2808. The `directfs_access_crawler_for_paths` and `directfs_access_crawler_for_queries` methods now work with `UsedTablesCrawler`, and the `workflow_linter` method includes the new `used_tables_crawler_for_paths` property. The `lint` method of the affected classes has been refactored into a `collect_tables` method that returns an iterable of `UsedTable` objects; `lint` now processes the collected tables and raises advisories as needed, while `apply` remains unchanged. Integration tests were executed as part of this commit.
- Increase test coverage (#2818). Several new unit tests for the `Tree` class in the Python AST codebase verify behaviors including `None` returns, string truncation, `NotImplementedError` being raised during node appending and method calls, correct handling of global variables, and that a constant is not from a specific module. This helps maintain code quality, detect unintended side effects, and prevent regressions.
- Strip preliminary comments in pip cells (#2763). Pip commands preceded by non-MAGIC comments are now processed correctly, so pip-based library management in Databricks notebooks keeps working: preliminary comments are stripped, and the case where the pip command is preceded by a single `%` or `!` is handled. A new unit test, built on the `Notebook`, `Dependency`, and `DependencyGraph` classes, validates that a notebook containing a malformed pip cell can still be parsed and built into a dependency graph even when non-MAGIC comments precede the `pip install` command.
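The comment-stripping behavior described above can be sketched in a few lines; this is a hypothetical illustration of the idea, not UCX's actual implementation:

```python
# Hypothetical sketch: drop leading '#' comment lines from a notebook cell so
# that the `%pip`/`!pip` command underneath is still recognized and processed.
def strip_preliminary_comments(cell: str) -> str:
    lines = cell.splitlines()
    while lines and lines[0].lstrip().startswith("#"):
        lines.pop(0)
    return "\n".join(lines)

cell = "# install dependencies first\n%pip install some-package"
print(strip_preliminary_comments(cell))  # -> "%pip install some-package"
```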
- Temporarily ignore `MANAGED` HMS tables on external storage location (#2837). The `_migrate_external_table` method in the `table_migrate.py` file now checks the object type when handling tables located on external storage. Previously it attempted to migrate any external table; now, if the object type is `MANAGED`, a warning is logged and the migration is skipped, because UCX does not support migrating managed tables on external storage. Accordingly, the `migrate_dbfs_root_tables` function in the HMS table migration test suite checks that no `SYNC TABLE` or `ALTER TABLE` queries for such tables appear in the `backend.queries` list. Unit and integration tests verify the modified workflow. This resolves issue #2838.
- Updated `sqlglot` requirement from `<25.23,>=25.5.0` to `>=25.5.0,<25.25` (#2765). The `sqlglot` requirement in the `pyproject.toml` file now allows any version greater than or equal to 25.5.0 but less than 25.25, resolving a conflict in the previous range. The update brings several bug fixes, refactors, and new features, such as support for the `OVERLAY` function in PostgreSQL and a flag to automatically exclude `Keep` diff nodes; upstream, the `check_deploy` job has been simplified and the supported dialect count has increased from 21 to 23. This keeps the project compatible with the latest sqlglot while improving functionality and stability.
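For illustration, the updated constraint in `pyproject.toml` takes roughly this form (a sketch; the surrounding dependency entries of the real file are omitted):

```toml
[project]
dependencies = [
    # any sqlglot from 25.5.0 up to, but not including, 25.25
    "sqlglot>=25.5.0,<25.25",
]
```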
- Whitelists catalogue library (#2780). The `catalogue` library has been whitelisted, partially addressing issue #193. Whitelisting allows the library to be used reliably in the project and makes its role easier for engineers to understand, improving maintainability for current and future development.
- Whitelists circuitbreaker (#2783). The `circuitbreaker` library implements the circuit breaker pattern, which improves fault tolerance and prevents cascading failures by failing fast and introducing a delay before retrying requests to a failed service. A new `circuitbreaker` entry, containing an empty list, has been added to the `known.json` configuration file. This partially addresses issue #1931, aimed at improving system resilience.
- Whitelists cloudpathlib (#2784). The `cloudpathlib` library has been added to the `known.json` file. Cloudpathlib is a Python library for manipulating cloud paths and includes modules for interacting with various cloud storage systems. Each module is listed with an empty list, indicating that no critical issues were found. Warnings have been added for the use of direct filesystem references in specific classes and methods of the `cloudpathlib.azure.azblobclient`, `cloudpathlib.azure.azblobpath`, `cloudpathlib.cloudpath`, `cloudpathlib.gs.gsclient`, `cloudpathlib.gs.gspath`, `cloudpathlib.local.implementations.azure`, `cloudpathlib.local.implementations.gs`, `cloudpathlib.local.implementations.s3`, `cloudpathlib.s3.s3client`, and `cloudpathlib.s3.s3path` modules; the warning indicates that direct filesystem references are deprecated and will be removed in a future release. This addresses a portion of issue #1931.
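The circuit breaker pattern that the `circuitbreaker` library implements can be sketched in plain Python; this minimal illustration is not the library's actual API:

```python
import time

class CircuitBreaker:
    """Minimal sketch of the circuit breaker pattern: after `max_failures`
    consecutive failures the circuit "opens" and calls fail fast, until
    `reset_after` seconds pass and one trial call is allowed through."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: the delay elapsed, allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```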
- Whitelists colorful (#2785). The `colorful` library, a Python package for generating ANSI escape codes to colorize terminal output, is now supported. Its modules, including `ansi`, `colors`, `core`, `styles`, `terminal`, and `utils`, have been whitelisted in the `known.json` file. This resolves issue #1931 and broadens the range of approved libraries, enabling more flexible and visually appealing terminal output.
- Whitelists cymem (#2793). The `known.json` file now whitelists the `cymem` package, with entries for `cymem`, `cymem.about`, `cymem.tests`, and `cymem.tests.test_import`. This partially addresses issue #1931. The commit does not modify existing functionality or add new methods; it only permits the `cymem` package to be used in the project.
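The ANSI escape codes that `colorful` generates can be illustrated with plain Python; this sketch shows the mechanism only and is not colorful's own API:

```python
# ANSI "Select Graphic Rendition" sequences: ESC [ <code> m switches styling,
# and code 0 resets it. Terminal-color libraries emit strings like these.
RESET = "\033[0m"
COLORS = {"red": "\033[31m", "green": "\033[32m", "yellow": "\033[33m"}

def colorize(text: str, color: str) -> str:
    """Wrap `text` in the escape code for `color`, then reset styling."""
    return f"{COLORS[color]}{text}{RESET}"

print(colorize("ok", "green"))  # renders "ok" in green on ANSI terminals
```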
- Whitelists dacite (#2795). The `dacite` library has been whitelisted in the `known.json` file. Dacite enables instantiating Python classes from dictionaries using type hints, providing more robust and flexible object creation. Users of the project can now rely on dacite without compatibility issues. This partially addresses issue #1931.
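The kind of type-hint-driven construction dacite provides can be approximated with the standard library; this is a simplified sketch of the idea, whereas `dacite.from_dict` itself additionally handles optionals, unions, and type checking:

```python
from dataclasses import dataclass, fields, is_dataclass

def from_dict(cls, data: dict):
    """Simplified dacite-style construction: build a dataclass instance
    from a dict, recursing into fields whose type is itself a dataclass."""
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        if is_dataclass(f.type) and isinstance(value, dict):
            value = from_dict(f.type, value)
        kwargs[f.name] = value
    return cls(**kwargs)

@dataclass
class Address:
    city: str

@dataclass
class User:
    name: str
    address: Address

user = from_dict(User, {"name": "Ada", "address": {"city": "London"}})
```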
- Whitelists databricks-automl-runtime (#2794). The `databricks-automl-runtime` package has been whitelisted in the `known.json` file, covering several nested packages and modules of Databricks' AutoML runtime for forecasting and hyperparameter tuning. The newly added modules provide data preprocessing and model training functionality, including handling time series data, missing values, and one-hot encoding. This addresses a portion of issue #1931.
- Whitelists dataclasses-json (#2792). The `known.json` file now whitelists the `dataclasses-json` library, which adds serialization and deserialization to Python dataclasses. The `marshmallow` library and its associated modules, as well as `typing-inspect`, have also been whitelisted, adding further serialization and deserialization capabilities. These changes do not affect existing functionality; they provide new options for handling these data structures. This partially resolves issue #1931.
- Whitelists dbl-tempo (#2791). The `dbl-tempo` library has been whitelisted and is now approved for use in the project. It provides tempo-related functionality including interpolation, intervals, resampling, and utility methods, all recorded in the `known.json` file. This addresses part of issue #1931 and keeps new libraries vetted and documented before use.
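The dataclass-to-JSON round trip that `dataclasses-json` automates can be approximated with the standard library; this sketch shows only the basic idea, while the library's decorator also generates `from_json`, schema support, and per-field configuration:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class Event:
    name: str
    payload: dict

def to_json(obj) -> str:
    """Serialize a dataclass instance to a JSON string."""
    return json.dumps(asdict(obj))

def event_from_json(raw: str) -> Event:
    """Deserialize a JSON string back into an Event."""
    return Event(**json.loads(raw))

raw = to_json(Event("table_migrated", {"table": "t1"}))
```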
- whitelist blis (#2776). The high-performance computing library `blis` has been added to the whitelist, partially addressing issue #1931. Blis provides dense linear algebra routines optimized for various CPU architectures, which can improve the performance of workloads that rely on these operations. The library and its components are now whitelisted, so users can leverage its capabilities.
- whitelists brotli (#2777). Support for the Brotli data compression algorithm has been whitelisted: a `brotli` entry with an empty array has been added to the `known.json` configuration file. This change does not modify existing functionality or introduce new methods; it recognizes Brotli as a supported algorithm, giving engineers an additional option for data compression and performance optimization. This partially addresses issue #1931.
Dependency updates:
- Updated sqlglot requirement from <25.23,>=25.5.0 to >=25.5.0,<25.25 (#2765).
Contributors: @pritishpai, @ericvergnaud, @JCZuurmond, @asnare, @dependabot[bot], @nfx, @HariGS-DB