
Commit f6438aa

Update databricks-labs-lsql requirement from ~=0.2.2 to >=0.2.2,<0.4.0 (#1137)
Updates the requirements on [databricks-labs-lsql](https://github.com/databrickslabs/lsql) to permit the latest version.

**Release notes**

*Sourced from [databricks-labs-lsql's releases](https://github.com/databrickslabs/lsql/releases).*

**v0.3.0**

* Added support for `save_table(..., mode="overwrite")` to `StatementExecutionBackend` ([#74](https://redirect.github.com/databrickslabs/lsql/issues/74)). In this release, we've added support for overwriting a table when saving data using the `save_table` method in the `StatementExecutionBackend`. Previously, attempting to use the `overwrite` mode would raise a `NotImplementedError`. Now, when this mode is specified, the method first truncates the table before inserting the new rows. The truncation is done using the `execute` method to run a `TRUNCATE TABLE` SQL command. Additionally, we've added a new integration test, `test_overwrite`, to the `test_deployment.py` file to verify the new `overwrite` mode functionality. A new option, `mode="overwrite"`, has been added to the `save_table` method, allowing the existing data in the table to be deleted and replaced with the new data being written. We've also added two new test cases, `test_statement_execution_backend_save_table_overwrite_empty_table` and `test_mock_backend_overwrite`, to verify the new functionality. It's important to note that the method signature has been updated to include a default value for the `mode` parameter, setting it to `append` by default. This change does not affect existing functionality and only provides a more convenient default for users of the method.

Contributors: [@william-conti](https://github.com/william-conti)
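The `save_table(..., mode="overwrite")` behavior described above could be exercised roughly as follows. This is a minimal sketch, not code from this PR: the warehouse ID, the table name, and the `Foo` dataclass are placeholders, and the `StatementExecutionBackend(ws, warehouse_id)` constructor arguments are assumed from lsql's documented usage.

```python
from dataclasses import dataclass

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.backends import StatementExecutionBackend


@dataclass
class Foo:
    name: str
    value: int


ws = WorkspaceClient()  # credentials resolved from the environment
# Warehouse ID is a placeholder; constructor arguments assumed from lsql docs.
backend = StatementExecutionBackend(ws, "your-warehouse-id")

rows = [Foo("a", 1), Foo("b", 2)]
# mode defaults to "append"; "overwrite" (new in 0.3.0) truncates the table
# with TRUNCATE TABLE before inserting the new rows.
backend.save_table("main.default.foo", rows, Foo, mode="overwrite")
```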
**Changelog**

*Sourced from [databricks-labs-lsql's changelog](https://github.com/databrickslabs/lsql/blob/main/CHANGELOG.md).*

**0.3.0**

* Added support for `save_table(..., mode="overwrite")` to `StatementExecutionBackend` ([#74](https://redirect.github.com/databrickslabs/lsql/issues/74)). In this release, we've added support for overwriting a table when saving data using the `save_table` method in the `StatementExecutionBackend`. Previously, attempting to use the `overwrite` mode would raise a `NotImplementedError`. Now, when this mode is specified, the method first truncates the table before inserting the new rows. The truncation is done using the `execute` method to run a `TRUNCATE TABLE` SQL command. Additionally, we've added a new integration test, `test_overwrite`, to the `test_deployment.py` file to verify the new `overwrite` mode functionality. A new option, `mode="overwrite"`, has been added to the `save_table` method, allowing the existing data in the table to be deleted and replaced with the new data being written. We've also added two new test cases, `test_statement_execution_backend_save_table_overwrite_empty_table` and `test_mock_backend_overwrite`, to verify the new functionality. It's important to note that the method signature has been updated to include a default value for the `mode` parameter, setting it to `append` by default. This change does not affect existing functionality and only provides a more convenient default for users of the method.

**0.2.5**

* Fixed PyPI badge ([#72](https://redirect.github.com/databrickslabs/lsql/issues/72)). In this release, we have implemented a fix to the PyPI badge in the README file of our open-source library. The PyPI badge displays the version of the package and serves as a quick reference for users. This fix ensures the accuracy and proper functioning of the badge, without involving any changes to the functionality or methods within the project. Software engineers can be assured that this update is limited to the README file, specifically the PyPI badge, and will not affect the overall functionality of the library.
* Fixed `no-cheat` check ([#71](https://redirect.github.com/databrickslabs/lsql/issues/71)). In this release, we have made improvements to the `no-cheat` verification process for new code. Previously, the check for disabling the linter was prone to false positives when the string `# pylint: disable` appeared for reasons other than disabling the linter. The updated code now includes an additional filter to exclude the string `CHEAT` from the search, and the number of characters in the output is counted using the `wc -c` command. If the count is not zero, the script terminates with an error message. This change enhances the accuracy of the `no-cheat` check, ensuring that the linter is being used correctly and that all new code meets our quality standards.
* Removed upper bound on `sqlglot` dependency ([#70](https://redirect.github.com/databrickslabs/lsql/issues/70)). In this update, we have removed the upper bound on the `sqlglot` dependency version in the project's `pyproject.toml` file. Previously, the version constraint required `sqlglot` to be at least 22.3.1 but less than 22.5.0. With this modification, there is no upper limit, enabling the project to use any version greater than or equal to 22.3.1. This change gives the project the flexibility to take advantage of future bug fixes, performance improvements, and new features available in newer `sqlglot` package versions. Developers should thoroughly test the updated package version to ensure compatibility with the existing codebase.

**0.2.4**

* Fixed `Builder` object is not callable error ([#67](https://redirect.github.com/databrickslabs/lsql/issues/67)). In this release, we have made an enhancement to the `Backends` class in the `databricks/labs/lsql/backends.py` file. The `DatabricksSession.builder()` method call in the `__init__` method has been changed to `DatabricksSession.builder`. This update uses the `builder` attribute to create a new instance of `DatabricksSession` without calling it like a function. The `sdk_config` method is then used to configure the instance with the required settings. Finally, the `getOrCreate` method is utilized to obtain a `SparkSession` object, which is then passed as a parameter to the parent class constructor. This modification simplifies the code and eliminates the error caused by treating the `builder` attribute as a callable object. Software engineers benefit from this change by having a more streamlined and error-free codebase when working with the open-source library.
* Prevent silencing of `pylint` ([#65](https://redirect.github.com/databrickslabs/lsql/issues/65)). In this release, we have introduced a new job, "no-lint-disabled", to the GitHub Actions workflow for the repository. This job runs on the latest Ubuntu version and checks out the codebase with a full history. It verifies that no new instances of code suppressing `pylint` checks have been added, by filtering the differences between the current branch and the main branch for new lines of code, and then checking whether any of those new lines contain a `pylint` disable comment. If any such lines are found, the job fails and prints a message indicating the offending lines of code, thereby ensuring that the codebase maintains a consistent level of quality by not allowing linting checks to be bypassed.
* Updated `_SparkBackend.fetch()` to return iterator instead of list ([#62](https://redirect.github.com/databrickslabs/lsql/issues/62)). In this release, the `fetch()` method of the `_SparkBackend` class has been updated to return an iterator instead of a list, which can result in reduced memory usage and improved performance, as the results of the SQL query can now be processed one element at a time. A new exception has been introduced to wrap any exceptions that occur during query execution, providing better debugging and error handling capabilities. The `test_runtime_backend_fetch()` unit test has been updated to reflect this change, and users of the `fetch()` method should be aware that it now returns an iterator that must be consumed to obtain the desired data. Thorough testing is recommended to ensure that the updated method still meets the needs of the application.
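Since the 0.2.4 change, `fetch()` yields rows lazily rather than returning a list, so callers should either stream the results or materialize them explicitly. The sketch below shows the consumption pattern only; it uses `StatementExecutionBackend` as a runnable stand-in (the `_SparkBackend` mentioned above only exists inside a Databricks runtime), and the warehouse ID, table name, and query are placeholders.

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.backends import StatementExecutionBackend

backend = StatementExecutionBackend(WorkspaceClient(), "your-warehouse-id")

# fetch() yields Row objects lazily; iterate to process them one at a time ...
for row in backend.fetch("SELECT * FROM main.default.foo"):
    print(row)

# ... or materialize the iterator explicitly when a full list is actually needed.
all_rows = list(backend.fetch("SELECT * FROM main.default.foo"))
```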
**0.2.3**

* Added support for common parameters in StatementExecutionBackend ([#59](https://redirect.github.com/databrickslabs/lsql/issues/59)). The `StatementExecutionBackend` class in the `databricks.labs.lsql` package's `backends.py` file now supports the passing of common parameters through keyword arguments (kwargs). This enhancement allows for greater customization and flexibility in the backend's operation, as the kwargs are passed to the `StatementExecutionExt` constructor. This change empowers users to control the behavior of the backend, making it more adaptable to various use cases. The key modification in this commit is the addition of the `**kwargs` parameter in the constructor signature and passing it to `StatementExecutionExt`, with no changes made to any methods within the class.
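As an illustration of the kwargs pass-through described above, extra keyword arguments given to the backend constructor are forwarded to `StatementExecutionExt`. This is a hedged sketch only: the specific keyword names (`catalog`, `schema`) are assumptions about what `StatementExecutionExt` accepts in this version, not something confirmed by the changelog text, and the warehouse ID is a placeholder.

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.backends import StatementExecutionBackend

ws = WorkspaceClient()
# Any extra keyword arguments are forwarded verbatim to StatementExecutionExt.
# `catalog` and `schema` are illustrative, assumed parameter names.
backend = StatementExecutionBackend(
    ws,
    "your-warehouse-id",
    catalog="main",
    schema="default",
)
```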
**0.2.2**

* Updating packages. In this update, the dependencies specified in the `pyproject.toml` file have been updated to more recent versions. The outdated packages `databricks-labs-blueprint~=0.4.0` and `databricks-sdk~=0.21.0` have been replaced with `databricks-labs-blueprint>=0.4.2` and `databricks-sdk>=0.22.0`, respectively. These updates are expected to bring new features and bug fixes to the software. The dependency `sqlglot` remains unchanged, with the same version requirement range of `sqlglot>=22.3.1,<22.5.0`. These updates ensure that the software will function as intended, while also taking advantage of the enhancements provided by the more recent versions of the packages.

**0.2.1**

* Fixed row converter to properly handle nullable values ([#53](https://redirect.github.com/databrickslabs/lsql/issues/53)). In this release, the row converter in the `databricks.labs.lsql.core` module has been updated to handle nullable values correctly. A new method `StatementExecutionExt` has been added, which manages the handling of nullable values during SQL statement execution. The `Row` class has also been modified to include nullable values, improving the robustness and flexibility of SQL execution in dealing with various data types, including null values. These enhancements increase the overall reliability of the system, making it more production-ready.
* Improved integration test coverage ([#52](https://redirect.github.com/databrickslabs/lsql/issues/52)). In this release, the project's integration test coverage has been significantly improved through several changes. A new function, `make_random()`, has been added to the `conftest.py` file to generate a random string of fixed length, aiding in the creation of more meaningful and readable random strings for integration tests. A new file, `test_deployment.py`, has been introduced, containing a test function for deploying a database schema and verifying successful data retrieval via a view. The `test_integration.py` file has been renamed to `test_core.py`, with updates to the `test_fetch_one` function to test the `fetch_one` method using a SQL query with an aliased value. Additionally, a new `Foo` dataclass has been added to the `tests/integration/views/__init__.py` file, supporting integration test coverage. Lastly, a new SQL query has been added to the integration test suite, located in the `some.sql` file, which retrieves data from a table named `foo` in the `inventory` schema. These changes aim to enhance the overall stability, reliability, and coverage of the project's integration tests. Note: the changes to the `.gitignore` file and the improvements to the `StatementExecutionBackend` class in the `backends.py` file are not included in this summary, as they were described in the opening statement.
* Rely on `hatch` being present on the build machine ([#54](https://redirect.github.com/databrickslabs/lsql/issues/54)). In this release, we have made significant changes to how we manage our build process and toolchain configuration. We have removed the need to manually install `hatch` version 1.7.0 on the build machine and instead rely on its presence, adding it to the list of required tools in the toolchain configuration. The command to create a virtual environment using `hatch` has also been added, and the `pre_setup` section no longer includes installing `hatch`, assuming its availability. We have also updated the `hatch` package version from 1.7.0 to 1.9.4, which may include bug fixes, performance improvements, or new features. This change may impact the behavior of any existing functionality that relies on `hatch`. The `pyproject.toml` file has been modified to update the `fmt` and `verify` sections, with `ruff check . --fix` replacing `ruff . --fix` and the removal of `black --check .` and `isort . --check-only`. A new configuration for `isort` has also been added to specify the `databricks.labs.blueprint` package as a known first-party package, enabling more precise management of imports related to that package. These changes simplify the build process and ensure that the project is using a more recent version of the `hatch` package for packaging and distributing Python projects.
* Updated `sqlglot` requirement from `~=22.3.1` to `>=22.3.1,<22.5.0`. In this update, the version constraint for `sqlglot` has been relaxed from `~=22.3.1` to `>=22.3.1,<22.5.0`. This change enables our project to utilize bug fixes and improvements made in the latest patch versions of `sqlglot`, while still preventing it from inadvertently using any breaking changes introduced in version 22.5.0 or later versions. This modification allows us to take advantage of the latest features and improvements in `sqlglot` while maintaining compatibility and stability in our project.

Dependency updates:

* Updated sqlglot requirement from ~=22.3.1 to >=22.3.1,<22.5.0 ([#51](https://redirect.github.com/databrickslabs/lsql/pull/51)).
**0.2.0**

* Added `MockBackend.rows("col1", "col2")[(...), (...)]` helper ([#49](https://redirect.github.com/databrickslabs/lsql/issues/49)). In this release, we have added a new helper method `MockBackend.rows("col1", "col2")[(...), (...)]` to simplify testing with `MockBackend`. This method allows for the creation of rows using a more concise syntax, taking in the column names and a list of values to be used for each column, and returning a list of `Row` objects with the specified columns and values. Additionally, an `__eq__` method has been introduced to check if two rows are equal by converting the rows to dictionaries using the existing `as_dict` method and comparing them. The `__contains__` method has also been modified to improve the behavior of the `in` keyword when used with rows, ensuring columns can be checked for membership in the row in a more intuitive and predictable manner. These changes make it easier to test and work with `MockBackend`, improving the overall quality and maintainability of the project.
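The `rows` helper described above makes it easy to build stub query results in unit tests. A minimal sketch, assuming `MockBackend` is importable from `databricks.labs.lsql.backends`; the column names and values are illustrative.

```python
from databricks.labs.lsql.backends import MockBackend

# Column names first, then one tuple of values per row.
expected = MockBackend.rows("first", "second")[
    ("aaa", True),
    ("bbb", False),
]

# Rows compare by their dictionary form (__eq__ builds on as_dict), and the
# `in` keyword checks column membership (__contains__), per the changelog.
assert expected[0].as_dict() == {"first": "aaa", "second": True}
assert "first" in expected[0]
```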
**0.1.1**

* Updated project metadata ([#46](https://redirect.github.com/databrickslabs/lsql/issues/46)). In this release, the project metadata has been updated to reflect changes in the library's capabilities and dependencies. The project now supports lightweight SQL statement execution using the Databricks SDK for Python, setting it apart from other solutions. The library size comparison in the documentation has been updated, reflecting an increase in the compressed and uncompressed size of Databricks Labs LightSQL, as well as the addition of a new direct dependency, SQLglot. The project's dependencies and URLs in the `pyproject.toml` file have also been updated, including a version update for `databricks-labs-blueprint` and the removal of a specific range for `PyYAML`.

Dependency updates:

... (truncated)

**Commits**

* [`073c922`](https://github.com/databrickslabs/lsql/commit/073c922c96c4952b95e48f6958bc9a308e8ebc1a) Release v0.3.0 ([#76](https://redirect.github.com/databrickslabs/lsql/issues/76))
* [`ca137d1`](https://github.com/databrickslabs/lsql/commit/ca137d1a9d355e8bde0c6b7bd5c3aed7e4dbb14b) Add support for `save_table(..., mode="overwrite")` to `StatementExecutionBac...`
* [`8921e0f`](https://github.com/databrickslabs/lsql/commit/8921e0f15a95941f8b92fefb4447faf6b9f2da44) Release v0.2.5 ([#73](https://redirect.github.com/databrickslabs/lsql/issues/73))
* [`a54c330`](https://github.com/databrickslabs/lsql/commit/a54c330a2ed8290100220b70a81576bb53c485cd) Fix `no-cheat` check ([#71](https://redirect.github.com/databrickslabs/lsql/issues/71))
* [`1c00b60`](https://github.com/databrickslabs/lsql/commit/1c00b608d829b93ded1b695a68d386e0fa5ca445) Fixed PyPI badge ([#72](https://redirect.github.com/databrickslabs/lsql/issues/72))
* [`90279bf`](https://github.com/databrickslabs/lsql/commit/90279bfd4a007d3f43c1ddc3b7e5b43fe769ac53) Remove upper bound on `sqlglot` dependency ([#70](https://redirect.github.com/databrickslabs/lsql/issues/70))
* [`b840229`](https://github.com/databrickslabs/lsql/commit/b840229a6555e9f485f97a126a3dcca87f004220) Release v0.2.4 ([#69](https://redirect.github.com/databrickslabs/lsql/issues/69))
* [`13f0e93`](https://github.com/databrickslabs/lsql/commit/13f0e934f71e0281e5443b8c3614f5105cc41ccf) Fix 'Builder' object is not callable error ([#67](https://redirect.github.com/databrickslabs/lsql/issues/67))
* [`2f3a6aa`](https://github.com/databrickslabs/lsql/commit/2f3a6aa7bc3fc738266d491351a53d171312ef51) Prevent silencing of `pylint` ([#65](https://redirect.github.com/databrickslabs/lsql/issues/65))
* [`0e9ac6a`](https://github.com/databrickslabs/lsql/commit/0e9ac6a772c26b174c2bb8898367584d4b5802c2) Fixed nightly tests ([#63](https://redirect.github.com/databrickslabs/lsql/issues/63))
* Additional commits viewable in the [compare view](https://github.com/databrickslabs/lsql/compare/v0.2.2...v0.3.0).

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
1 parent c863619 commit f6438aa

File tree

1 file changed: +1 addition, -1 deletion

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ classifiers = [
     "Topic :: Utilities",
 ]
 dependencies = ["databricks-sdk==0.23.0",
-    "databricks-labs-lsql~=0.2.2",
+    "databricks-labs-lsql>=0.2.2,<0.4.0",
     "databricks-labs-blueprint~=0.4.3",
     "PyYAML>=6.0.0,<7.0.0"]
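To see what this one-line change permits, the sketch below compares the old `~=0.2.2` constraint with the new `>=0.2.2,<0.4.0` range. It uses the `packaging` library, which is not a dependency of this repo and is assumed to be installed (`pip install packaging`).

```python
from packaging.specifiers import SpecifierSet

old = SpecifierSet("~=0.2.2")          # equivalent to >=0.2.2,<0.3.0
new = SpecifierSet(">=0.2.2,<0.4.0")   # additionally admits the 0.3.x line

for version in ("0.2.2", "0.2.5", "0.3.0", "0.4.0"):
    print(version, version in old, version in new)

# Expected output:
# 0.2.2 True True
# 0.2.5 True True
# 0.3.0 False True   <- the new range allows databricks-labs-lsql 0.3.0
# 0.4.0 False False
```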

0 commit comments
